How does that machine make an intelligent decision?

Reading to your kids at night is a pleasure. I didn’t realize how much my kids loved it and how much I loved it as well. For some reason, I haven’t been doing as much reading as I would like, both to the kids at night and in general. So, it was time for a change and I took on a challenge.

So, I dusted off this book and told them I would be reading it to them every night.

Both kids got excited and huddled straight into bed. And off we went on this journey. The plan is to read a few pages every day so that, by the end of the year, we will have finished the book. Yes, that's slow, but it's intentional; there is so much packed into the history of the various civilizations. I will report on what we learned towards the end of the school year.

Back to the main story: "How does that machine make an intelligent decision?" As I started reading the book, I realized that I needed more light and called out to the Google device, "Hey Google, change the brightness to 30-40%". You see what's interesting here? For me, it's perfectly reasonable to ask for a brightness of 30-40%, and most humans would handle that imprecision without any trouble. How about a machine, though? How does a machine deal with "30-40%"? Should it ask a clarifying question? If it did, would that bother and frustrate me? Should it just take the average and set it to 35%? Should it go to 30%, given that it's nighttime, and then prompt me to see if I want to turn it up more?

See, these decisions that come to us so naturally are not so natural for machines. An instruction that sounds simple may involve many downstream decisions, and those are not easy to anticipate.

We keep talking about how machines make human life better through better decision making. But behind every bit of information a machine gives back to help us decide, there are many decisions that went into creating those branches. Not an easy job for a product manager.

So, it got me thinking, and got me googling various articles. One of them happened to be this one (it is mostly an informational piece about expectations around temporal expressions):

Managing Uncertainty in Time Expressions for Virtual Assistants

At the end of the article, it lists what humans would want from a virtual assistant (in the context of managing uncertainty in time expressions), but I think those expectations might go even beyond the scope of the paper. Here are some of them:

  • Implied flexibility
  • Implied constraints
  • Complex expressions
  • Respect uncertainty
  • Recognize uncertainty
  • Embrace flexibility
  • Notify intelligently
  • Leverage implicit knowledge

You can read more about it in the paper, but it's a great thought process to keep in mind when designing such systems. I have been researching and haven't come across many articles (yet) that describe how that uncertainty gets coded into a system. Does it have to be rule-based? Does it have to be derived from the order of the words? What additional context can be used?

  • Time of day? – If it's nighttime, choose the lower end of the range, and the reverse during the daytime?
  • Previous brightness level? – If the lights are already at 30% and there is a request to change something, don't keep the brightness at 30% unless it is mentioned specifically.
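As a thought experiment, here is a minimal, rule-based sketch of how a device might combine those two context signals to resolve a vague range. Everything here is made up for illustration (the function name, the night-hours cutoff, the rules themselves); a real assistant would be far more sophisticated:

```python
import re
from datetime import datetime

def resolve_brightness(request: str, current_level: int, now: datetime) -> int:
    """Resolve a vague brightness request like '30-40%' into one number.

    Purely illustrative rules:
      - At night (8pm-7am), prefer the low end of the range; during
        the day, prefer the high end.
      - If the chosen value equals the current level, nudge to the other
        end of the range so the request visibly changes something.
    """
    match = re.search(r"(\d+)\s*-\s*(\d+)\s*%", request)
    if not match:
        raise ValueError("no percentage range found in request")
    low, high = int(match.group(1)), int(match.group(2))

    is_night = now.hour >= 20 or now.hour < 7
    chosen = low if is_night else high

    # Don't "change" the brightness to the level it is already at.
    if chosen == current_level:
        chosen = high if chosen == low else low
    return chosen

# Nighttime, lights at 80%: pick the low end of the range (30).
print(resolve_brightness("change the brightness to 30-40%", 80, datetime(2022, 1, 5, 21, 30)))
# Nighttime, already at 30%: nudge to 40 so something actually changes.
print(resolve_brightness("change the brightness to 30-40%", 30, datetime(2022, 1, 5, 21, 30)))
```

Even this toy version shows the problem: every rule encodes an opinion about what the user "really meant", and a product manager had to decide each one.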

You can see that the kinds of contextual information humans use come to us naturally, yet it is very hard to programmatically embed that context into a machine. But I wouldn't be surprised when it gets done.

I am reading the book "Superintelligence" as we seek some answers.

Please comment and let me know if you have some interesting papers for me to read.


Papa, but I don’t want the computers to be smarter than us

So, I have been reading a bit about 4IR, artificial intelligence, and a few related topics, and naturally some of that conversation seeps into the chats I have with my kids.

The other day, we were chatting about how advancements in technology can make human life easier. I believe I was reading them an interesting paragraph from the book "AI Superpowers" by Kai-Fu Lee (I thought they would find that bit amusing) when my 10-year-old said,

“but Papa, I don’t want the computers to be smarter than us”

I had to pause, take a moment, and break away from the book for a bit. Then we talked about how computers are already better than humans at many things, and about how being "smart" or "intelligent" is very subjective. How we tell a computer to do something for us decides what it will end up doing.

The conversation got a bit awkward and sort of drifted from the initial intent of her question.

It's been on my mind since.

We sometimes talk about other aspects of artificial intelligence and computer algorithms, but I have been thinking about making those concepts more accessible to my kids, so that when they grow up they are a bit better equipped to understand both the positive and the adverse impacts of technology. Ultimately, (I hope) they will affect the world in more positive ways by taking some time to think about how technology can be applied to better the human experience.

Unlike what many social networks have done...

Please feel free to connect with me and share how your conversations with your kids are shaping up as they relate to technology.