Artificial Intelligence
AI Gains Social Intelligence; Infers Goals and Failed Plans
A machine learning algorithm inspired by psychology and cognitive science.
Posted January 22, 2021 | Reviewed by Devon Frye
Have you heard the familiar saying, “Even the best-laid plans go awry”? It is the English version of “The best-laid schemes o' mice an' men gang aft agley,” from Robert Burns' 1785 Scottish poem “To a Mouse, on Turning Her Up in Her Nest with the Plough.”
Artificial neural networks can make predictions with a high degree of accuracy when the goal is known, so many algorithms are simply given the goal in advance. But what happens in artificial intelligence (AI) machine learning if the goal is not given, or when plans fail? In a recent study, artificial intelligence researchers from the Massachusetts Institute of Technology (MIT) demonstrated a new machine learning algorithm with an improved ability to infer human goals.
“By performing this research at the intersection of cognitive science and AI, we hope to lay some of the conceptual and technical groundwork that may be necessary to understand our boundedly-rational behavior,” wrote the MIT researchers.
The Cambridge Dictionary defines inference as “a guess that you make or an opinion that you form based on the information that you have.” The study of how people make inferences spans many disciplines, including cognitive psychology, artificial intelligence, logic, mathematics, statistics, and philosophy.
The MIT research team includes Tan Zhi-Xuan, the lead author of the paper, along with Jordyn L. Mann, Tom Silver, Joshua B. Tenenbaum, and Vikash K. Mansinghka. Together they conducted the study that was presented at last month’s 34th Conference on Neural Information Processing Systems (NeurIPS).
“People routinely infer the goals of others by observing their actions over time,” wrote the researchers. “Remarkably, we can do so even when those actions lead to failure, enabling us to assist others when we detect that they might not achieve their goals. How might we endow machines with similar capabilities?”
For example, picture a toddler watching an adult whose arms are full of books repeatedly bump into a cabinet with closed doors while uttering unintelligible sounds of bewilderment. After observing the adult, the toddler, acting altruistically, decides to help and walks over to open the cabinet for the adult. The toddler was not told beforehand that the adult’s goal was to put the books inside the cabinet. Yet somehow the toddler was smart enough to figure out what the adult intended to do.
This scenario was part of an actual experiment conducted by psychologists Felix Warneken and Michael Tomasello at the Max Planck Institute for Evolutionary Anthropology. In their study, published in Science in 2006, Warneken and Tomasello observed that children as young as 18 months old could guess what adults who failed at a task had intended to do. The toddlers demonstrated their understanding by helping the adult complete the intended task, such as picking up a dropped pen or opening the doors of a cabinet. In other words, without being explicitly told, the toddlers could infer what the adult’s goal was, even when the adult failed to achieve it.
When comparing artificial intelligence with human intelligence, a toddler can be considered more intelligent for many reasons, including the innate ability to generalize concepts from limited knowledge or training. For example, a child can figure out that a spherical toy that bounces is a ball without having to learn every type of ball ever manufactured. Deep neural networks, on the other hand, lack common sense and require massive amounts of data for training.
“While there has been considerable work on inferring the goals and desires of agents, much of this work has assumed that agents act optimally to achieve their goals,” wrote the researchers. “Even when this assumption is relaxed, the forms of sub-optimality considered are often highly simplified.” In other words, most of the work on enabling machine learning to infer goals assumes that things go according to plan, and the algorithms reflect this assumption.
What sets this study apart is that the new machine-learning algorithm accounts for when things do and do not go according to plan, a more nuanced approach that makes for more robust artificial intelligence.
“Our architecture models agents as boundedly-rational planners that interleave search with execution by replanning, thereby accounting for sub-optimal behavior,” wrote the team. “These models are specified as probabilistic programs, allowing us to represent and perform efficient Bayesian inference over an agent’s goals and internal planning processes.”
For inference, the researchers developed a sequential Monte Carlo algorithm called Sequential Inverse Plan Search (SIPS). They built their architecture using Gen, a general-purpose probabilistic programming system developed at MIT.
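To give a rough sense of how a sequential Monte Carlo loop can infer goals from observed actions, here is a minimal Python sketch. It is not the team's SIPS implementation (which is written as probabilistic programs in Gen); the toy grid world, the candidate goals, the NOISE parameter, and the helper functions are all hypothetical simplifications introduced only for illustration.

```python
import random

# Toy setting: an agent starts at position 0 on a line and is heading toward
# one of several candidate goal positions. An observer watches its moves one
# at a time and maintains weighted "particles," each a hypothesis about the goal.

CANDIDATE_GOALS = [3, 7, 10]   # hypothetical goals the observer considers
NOISE = 0.2                    # chance the agent acts sub-optimally (crude bounded rationality)

def action_likelihood(position, action, goal):
    """Probability of an action (+1 or -1) if the agent is pursuing `goal`.

    The agent usually steps toward the goal, but with probability NOISE it
    takes the wrong step -- a stand-in for mistakes, backtracking, or failure."""
    toward = 1 if goal > position else -1
    return (1 - NOISE) if action == toward else NOISE

def infer_goal(observed_actions, start=0, n_particles=200):
    """Sequential Monte Carlo over goals: reweight and resample after each action."""
    particles = [random.choice(CANDIDATE_GOALS) for _ in range(n_particles)]
    weights = [1.0] * n_particles
    position = start
    for action in observed_actions:
        # Reweight each particle by how well its goal explains the observed action.
        weights = [w * action_likelihood(position, action, g)
                   for w, g in zip(weights, particles)]
        position += action
        # Resample so particles concentrate on the more plausible goals.
        total = sum(weights)
        probs = [w / total for w in weights]
        particles = random.choices(particles, weights=probs, k=n_particles)
        weights = [1.0] * n_particles
    # Report the approximate posterior over goals as particle frequencies.
    return {g: particles.count(g) / n_particles for g in CANDIDATE_GOALS}

# The agent steps right four times, then backtracks once (a small "failed" detour):
print(infer_goal([+1, +1, +1, +1, -1]))
```

Running the sketch on a trajectory that includes backtracking shifts the estimated posterior toward the nearer goals, which is the basic intuition behind inferring goals from imperfect, non-optimal behavior, though the actual SIPS algorithm also reasons about the agent's internal planning process.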
“We present experiments showing that this modeling and inference architecture outperforms Bayesian inverse reinforcement learning baselines, accurately inferring goals from both optimal and non-optimal trajectories involving failure and back-tracking, while generalizing across domains with compositional structure and sparse rewards,” the researchers reported.
Human intelligence, neuroscience, and biological cognition have served as the inspirational springboard for AI machine learning. Artificial intelligence has emerged from a period of relative dormancy to become a tool of choice for prediction, pattern recognition, computer vision, speech recognition, and more, largely thanks to deep learning algorithms. Ironically, by reverse-engineering the natural cognitive capabilities of human toddlers, artificial intelligence scientists are making progress toward more flexible, robust, and capable machine learning.
Copyright © 2021 Cami Rosso All rights reserved.