Can artificial intelligence compete with human intelligence?
It’s a good question, and one that is being asked more and more as AI technology becomes more common in our lives. Many of us interact with artificial intelligence all the time without even realizing it. The algorithms that recommend shows we might like on Netflix or figure out the fastest route to the park are all based on AI.
But the technology is not as advanced as some are claiming, according to one scientist. Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute, an independent theoretical research institute. She gave a presentation Monday at UC Merced about the limits of current artificial intelligence, and what obstacles the technology faces in the coming years.
According to Mitchell, proponents of artificial intelligence have been overly optimistic. The technology is not as advanced as many people might think it is, she said. The way we think and act as people is informed by years of observing the world around us and making connections between things. This intuition is not easily understood even by us humans, let alone computers.
“People are overconfident [about AI] because I think they don’t understand how complex the human brain is,” Mitchell said.
For example, she pointed out that the majority of accidents involving driverless cars are rear-end collisions. The object-recognition technology is advanced enough that an autonomous car can identify a stop sign from hundreds of feet away, yet the car doesn't understand that it can't simply slam on the brakes at any moment, say, in a crowded parking lot with traffic close behind. The question for AI researchers is: in a world of infinite complexity, how do you teach a machine to intuitively understand when it's appropriate to hit the brakes?
“These systems can do very impressive things,” Mitchell said. “But the things that are easy for us [they can’t do]. It really lacks common sense.”
It’s not a huge problem for an AI whose job is to, say, identify watermelons in photos of picnics. But for technology involving public safety, researchers have to get it right. And Mitchell acknowledged that human drivers make mistakes all the time. One audience member asked how much safer an AI driver would need to be than a human driver for the technology to be worth the risk. There wasn’t a clear-cut answer. And that’s why the field of AI research, Mitchell said, is so interesting.
Another problem with AI systems is that they frequently arrive at the right answer, but for the wrong reason. For instance, researchers gave an AI a set of photos depicting landscapes as well as a set of photos of birds. The AI was able to tell the birds from the landscapes, but only because of a fluke. All the photos of the birds had a blurred background, while the landscapes did not. So, instead of recognizing the birds themselves, the AI simply looked at the blurred background and called it “bird.”
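The bird example is an instance of what machine-learning researchers often call a spurious correlation, or "shortcut learning." A minimal, entirely hypothetical sketch in Python (the data, blur scores, and function names here are invented for illustration, not taken from the research Mitchell described) shows how a model can score perfectly on its training set while keying on the wrong cue:

```python
# Hypothetical sketch of shortcut learning: a "classifier" that keys on
# background blur (a spurious cue) instead of anything bird-like.

def train_blur_threshold(images):
    """Learn a blur cutoff separating the two classes.
    Each image is a dict with a 'blur' score and a 'label'."""
    bird_blurs = [im["blur"] for im in images if im["label"] == "bird"]
    scene_blurs = [im["blur"] for im in images if im["label"] == "landscape"]
    # Midpoint between the least-blurred bird and most-blurred landscape.
    return (min(bird_blurs) + max(scene_blurs)) / 2

def predict(threshold, image):
    return "bird" if image["blur"] > threshold else "landscape"

# Training set: every bird photo happens to have a blurred background.
train = [
    {"blur": 0.9, "label": "bird"},
    {"blur": 0.8, "label": "bird"},
    {"blur": 0.2, "label": "landscape"},
    {"blur": 0.1, "label": "landscape"},
]
t = train_blur_threshold(train)

# Perfect accuracy on the training distribution...
assert all(predict(t, im) == im["label"] for im in train)

# ...but a sharp, in-focus bird photo is misclassified.
sharp_bird = {"blur": 0.15, "label": "bird"}
print(predict(t, sharp_bird))  # prints "landscape"
```

The trap is that ordinary accuracy metrics can't see the problem: the shortcut works everywhere the spurious cue holds, and only fails once the system meets data that breaks the pattern.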
“The system will give the right answer, but for reasons that aren’t the same as what a human would give,” Mitchell said.
We are still in the early stages of AI when it comes to true intelligence, Mitchell said. But with corporations pouring huge investments into the technology, progress could come sooner rather than later.