- Sputnik International

Artificial Intelligence Machines Match IQ Test Results of 4-Year-Olds

Human Brain (© Flickr / Ars Electronica)
Some may be wary and some may celebrate, but no one can stop machines from developing intelligence. A fresh study shows an AI has reached the intellectual level of a 4-year-old human on an IQ test. What’s next?

From the start, computers have outperformed humans at arithmetic. More recently they have come to dominate chess and other games, and they are good at face recognition, too. But how do they compare to humans in more general terms?

Psychometric tests were designed by psychologists to measure how intellectually developed a person is. Probably the best-known psychometric test is the IQ test. A team of researchers at the University of Illinois at Chicago, led by Professor Stellan Ohlsson, applied the method to one of the most advanced computer systems.

The results they came up with are astonishing: modern AI systems appear to have reached the level of an average 4-year-old child.

The verbal part of the Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III) was administered to the ConceptNet 4 AI system, and the results were recently published on arXiv.org.

“We chose the WPPSI-III because we expected some of its subtests to highlight limitations of current AI systems, as opposed to some PAI [Psychometric Artificial Intelligence] work with other psychometric tests of verbal abilities that highlights the progress that AI systems have made over the decades,” the paper says.

The researchers wrote a Python program to feed the test questions into ConceptNet.
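To illustrate the idea, here is a minimal sketch of how such a program might turn WPPSI-style items into lookups against a knowledge base of ConceptNet-style assertions. The tiny assertion list below is a hypothetical stand-in, not real ConceptNet data, and the function names are invented for this example; the researchers' actual code is not reproduced in the article.

```python
# Illustrative sketch only. ConceptNet stores knowledge as
# (concept, relation, concept) assertions; this toy list is a
# hypothetical stand-in for the real knowledge base.
ASSERTIONS = [
    ("cat", "IsA", "animal"),
    ("cat", "IsA", "pet"),
    ("ball", "IsA", "toy"),
    ("apple", "IsA", "fruit"),
    ("ball", "HasProperty", "round"),
    ("apple", "HasProperty", "round"),
]

def vocabulary(concept):
    """WPPSI-style Vocabulary item ('What is a cat?'):
    answerable by a direct IsA lookup."""
    return [obj for subj, rel, obj in ASSERTIONS
            if subj == concept and rel == "IsA"]

def similarities(a, b):
    """WPPSI-style Similarities item ('Ball and apple are both ___'):
    answerable by intersecting the features of the two concepts."""
    feats = lambda c: {(rel, obj) for subj, rel, obj in ASSERTIONS if subj == c}
    return sorted(obj for rel, obj in feats(a) & feats(b))

print(vocabulary("cat"))              # simple lookup succeeds
print(similarities("ball", "apple"))  # shared feature found
# A Comprehension item such as "Why do people shake hands?" has no single
# assertion to look up, which is exactly where lookup-style systems falter.
```

This also foreshadows the score pattern reported below: subtests that reduce to lookups (Vocabulary, Similarities) suit such a system far better than ones requiring commonsense reasoning.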

While the findings are impressive, improvements are needed before an AI could match a 5-year-old child.

“ConceptNet does well on Vocabulary and Similarities, middling on Information, and poorly on Word Reasoning and Comprehension,” the paper highlights.

This means that the AI system easily answers questions like “What is a cat?” or “Ball and apple are both ___,” but struggles with abstract reasoning; “Why do people shake hands?” is a far more difficult question for it.

“Future work on the query interface, the knowledge base, and the inference routines will inevitably raise the question of where the boundary is to be drawn between natural language understanding on the one hand, and common sense reasoning on the other,” the paper concludes.
