Imagine a future in which robots screen job candidates, universities introduce artificially intelligent tutors into classrooms and news services use a combination of social media and artificial intelligence (AI) to roll out breaking news.
Well, that future is now.
Preliminary successes and our fascination with computers are driving the exploration of a myriad of applications for artificial intelligence. Such is the interest that the French government will spend $1.85 billion over the next five years to support research in the field.
But there are some serious limitations to AI, says Gary Smith, Pomona’s Fletcher Jones Professor of Economics and author of the upcoming book The AI Delusion. “Thus far, artificial intelligence is designed to perform narrowly defined tasks, and it does it really well,” says Smith. “But moving outside of those tasks, computers have a lot of trouble. It is particularly evident when it requires knowledge of what you’re doing.”
Smith argues that artificial intelligence still lacks integrative thinking and has trouble deciphering meaning or patterns without context. He adds that in order to improve AI, researchers are studying how to get computers to think more like human brains, including research into how children learn.
“Our fascination with computers has led us to believe that artificial intelligence can make smarter decisions than humans,” says Smith.
This is worrisome when AI is applied to algorithmic criminology, for example. Courts across the country are using computer models to make bail, prison-sentence and parole decisions based on statistical patterns that may be merely coincidental but cannot be evaluated because they are hidden inside black boxes.
“At this point in the development of AI, we should be very skeptical of turning important decisions over to computers,” says Smith.
“The danger is not that computers are smarter than us. The real danger is that we think computers are smarter than us. And that’s not the case.”