Today, the term artificial intelligence, or A.I., has become widely used in the media, often in connection with virtual assistants like Siri, Google Assistant, or Alexa. But are these assistants truly intelligent?
We happen to know what people expect from a truly intelligent assistant from one of our studies. We asked people to pretend they had a perfect, intelligent assistant and to track the activities they would expect this assistant to help them with. We discovered that they wanted everything they might want from an observant human assistant, for example, noticing that they have not taken a break for a while and reminding them to get up, and then more, for instance, alerting them if someone stole their identity online.

Today’s virtual assistants are intelligent in that they can speak and understand some human language. They also have some limited agency: they can make unprompted suggestions for directions to work in the morning, or they can alert you to leave early to get to an appointment in time. They excel at simple one-step tasks: trivia questions, weather, timers or reminders, turning on lights. They were optimized for navigation and directions, and they are most useful when the user’s hands are busy, in a car or in the kitchen.

But there are many ways in which these devices are not intelligent. First, they’re not very good at understanding real language, for example, multi-clause sentences such as “find the status of an American flight from San Francisco to Vancouver that leaves today at 4:55 p.m.” They’re not able to carry out a conversation, and each sentence must be self-sufficient, with no pronouns or other implicit references. They often fail on polite language, accents, hesitations, and repeated phrases. A pause in a sentence is often misinterpreted as the end of it. They’re not at all good at research tasks, putting together multiple sources of information, or multi-step sequences such as “text my next meeting that I will be 10 minutes late” or “remind me to call the restaurant when it opens.” And the agency component is quite weak: there is very little learning from observed user behavior patterns, and almost no proactive suggestions.
The greatest danger in believing that these devices are truly intelligent lies in the mental model of A.I. that users form. People learn very quickly that they need to behave in a certain way to get results from these assistants, and that these assistants are good only at certain types of limited tasks. They adjust their behavior and avoid other activities. As a result, they settle for what works without exploring the full potential the technology has to offer. So one day, when these systems get better, and they will, people may never discover the great features of these assistants.