Computers have outperformed humans at algebra from the start. More recently they have come to dominate chess and other games, and they are good at face recognition, too. But how do they compare with humans in more general terms?
Artificial intelligence solves SAT geometry Qs as well as US 11th-grade students http://t.co/tOY4szDafO pic.twitter.com/LnmavzE8pI
— Republic of Math (@republicofmath) September 28, 2015
The results researchers came up with are striking: modern AI systems appear to have reached the level of an average four-year-old child.
Researchers administered the verbal part of the Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III) to the ConceptNet 4 AI system. The results were recently published at arXiv.org.
“We chose the WPPSI-III because we expected some of its subtests to highlight limitations of current AI systems, as opposed to some PAI [Psychometric Artificial Intelligence] work with other psychometric tests of verbal abilities that highlights the progress that AI systems have made over the decades,” the paper says.
The researchers wrote a Python program that translated the test questions into queries against ConceptNet.
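To give a flavor of what such a translation might look like, here is a minimal sketch of a Vocabulary-style lookup ("What is a cat?"). Note the assumptions: the paper queried ConceptNet 4 through its own Python interface, whose code is not shown in the article, so this illustration instead uses the public ConceptNet 5 web API at api.conceptnet.io, which is easy to reproduce today.

```python
# Illustrative sketch only: answers a Vocabulary-style question
# ("What is a <concept>?") by collecting IsA edges from the public
# ConceptNet 5 web API. The paper itself used ConceptNet 4's Python
# interface, which this does not reproduce.
import requests

def define(concept: str, limit: int = 5) -> list[str]:
    """Return things the concept 'is a' kind of, per ConceptNet."""
    resp = requests.get(
        "http://api.conceptnet.io/query",
        params={"start": f"/c/en/{concept}", "rel": "/r/IsA", "limit": limit},
    ).json()
    return [edge["end"]["label"] for edge in resp["edges"]]

print(define("cat"))  # e.g. ['an animal', 'a pet', ...]
```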
My favorite quote about AI: it's only artificial intelligence until you understand the algorithm. Then it's just a program. #stratahadoop
— Mark Madsen (@markmadsen) September 29, 2015
Impressive as the findings are, the system still needs improvement before it can match a five-year-old child.
“ConceptNet does well on Vocabulary and Similarities, middling on Information, and poorly on Word Reasoning and Comprehension,” the paper highlights.
This means that the AI system easily answers questions like "What is a cat?" or "Ball and apple are both ___," but struggles with abstract reasoning: "Why do people shake hands?" is a much harder question for it.
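A Similarities-style question ("Ball and apple are both ___") can be approximated by intersecting the neighbors of the two concepts, as in the sketch below. Again, this is a hypothetical illustration against the public ConceptNet 5 web API, not the researchers' actual ConceptNet 4 code, and the relations chosen (IsA, HasProperty) are an assumption about where such overlaps would live.

```python
# Sketch of a Similarities-style query: what do two concepts have
# in common? Intersect their IsA and HasProperty neighbours in the
# public ConceptNet 5 web API. Hypothetical illustration only.
import requests

def related_terms(concept: str, rel: str) -> set[str]:
    """Return the set of terms the concept links to via `rel`."""
    resp = requests.get(
        "http://api.conceptnet.io/query",
        params={"start": f"/c/en/{concept}", "rel": rel, "limit": 100},
    ).json()
    return {edge["end"]["label"] for edge in resp["edges"]}

def in_common(a: str, b: str) -> set[str]:
    """Candidate completions for 'A and B are both ___'."""
    common = set()
    for rel in ("/r/IsA", "/r/HasProperty"):
        common |= related_terms(a, rel) & related_terms(b, rel)
    return common

print(in_common("ball", "apple"))  # might include 'round', ...
```

By contrast, a question like "Why do people shake hands?" has no single edge or simple intersection to look up, which is one way to see why Comprehension items are so much harder for a knowledge-graph system.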
“Future work on the query interface, the knowledge base, and the inference routines will inevitably raise the question of where the boundary is to be drawn between natural language understanding on the one hand, and common sense reasoning on the other,” the paper concludes.