Human Insights for Machine Smarts
Cognitive and developmental scientists are helping AI think more like us
- Cognitive and developmental psychologists are helping AI developers create systems that work more like the human mind.
- Research labs are working on algorithms that can reason, self-correct, and understand emotion.
- Psychologists cite a need for artificial agents that can better mimic, and improve on, the cognitive steps humans make when handling uncertainty.
Shared history • Learning like children • Artificial decision-making and empathy • Next steps: AI that can pivot
Machines can now beat us at chess, create art, and even diagnose diseases. Yet, for all its capabilities, artificial intelligence (AI) is not artificial humanity. It lacks our capacity for imagination, critical thinking, and emotional intelligence.
Today’s AI is best suited for rote tasks and data-driven decisions. But many experts say psychological science is the key to unlocking AI’s full potential. Cognitive and developmental researchers are helping developers create AI systems that can explore, extrapolate, and understand with the speed and agility of the human mind. They envision algorithms that can engage in common-sense reasoning, self-correction, and even empathy.
AI Terms To Know
Large language models (LLMs)—AI systems capable of understanding and generating human language by processing vast amounts of data. Source: IBM
Machine learning—a branch of computer science that focuses on computers that can detect patterns in massive datasets and make predictions based on what they learn from those patterns. Source: U.S. Department of Energy
Deep neural networks (DNNs)—multilayered networks that simulate the complex decision-making power of the human brain. Source: IBM
“Humans are still the most efficient learners for many tasks,” said APS Past Board Member Tania Lombrozo, codirector of the Natural and Artificial Minds initiative at Princeton University. “Understanding how humans learn with such limited data is going to be really valuable, not just for understanding human cognition better, but also for building better AI systems.”
Shared history
Cognitive scientists were on the ground floor of AI research. In the 1970s and 80s, psychologists Jay McClelland and David Rumelhart created computer models that simulated human perception, memory, language comprehension, and other cognitive tasks. Computer and cognitive scientist Geoffrey Hinton, who has training in empirical psychology, is among the recipients of the 2024 Nobel Prize in Physics for his pioneering work in machine learning.
But over the years, AI research and cognitive science have taken divergent paths, Lombrozo said.
“If you look at the last 10 years, AI has had this sudden explosion and not quite succeeded in maintaining its close connections to cognitive science,” she asserted.
Lombrozo believes initiatives like Princeton’s AI lab can foster the interdisciplinary collaborations that can advance machine learning. And she and her colleagues are already identifying some AI features to build on.
In a review published in 2024, she showed that AI can sometimes correct itself and alter its conclusions without new external input—a phenomenon called learning by thinking. Like humans, AI has shown it can learn not only through observation but through step-by-step thinking, self-explanation, and analogical reasoning. Large language models (LLMs) can be asked to elaborate on an answer, then correct and refine their original responses on the basis of this elaboration. When LLMs are asked to draw analogies, rather than providing a simple answer to a question, their accuracy sometimes improves, she noted (Lombrozo, 2024).
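This elaborate-then-refine pattern can be sketched in a few lines of Python. It is a minimal illustration, not code from Lombrozo’s work: the `ask` helper is a hypothetical stand-in for whatever LLM API is in use, and the prompts are invented for the example.

```python
def ask(prompt: str) -> str:
    """Hypothetical helper: send a prompt to an LLM and return its reply."""
    raise NotImplementedError("Connect this to the LLM API you are using.")

def answer_by_thinking(question: str) -> str:
    # 1. Get an initial answer.
    first = ask(f"Question: {question}\nAnswer concisely.")
    # 2. Ask the model to elaborate on its own answer.
    explanation = ask(
        f"Question: {question}\nYour answer: {first}\n"
        "Explain step by step why this answer is correct."
    )
    # 3. Let the model revise in light of its own elaboration. No new external
    #    input is involved; any correction comes from the model's own reasoning,
    #    which is the "learning by thinking" idea.
    return ask(
        f"Question: {question}\nOriginal answer: {first}\n"
        f"Explanation: {explanation}\n"
        "If the explanation reveals a problem, give a corrected answer; "
        "otherwise restate the original answer."
    )
```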
But learning by thinking does not always lead to “learning” in the sense of reaching accurate conclusions, Lombrozo said. She and colleagues are also identifying instances in which LLMs, when engaging in step-by-step thinking, actually err in ways similar to humans.
“Sometimes when you engage in a process like explicit verbal reasoning, you think you’ve achieved some new understanding, but you’re actually wrong,” she said. “An inference you drew could be incorrect. That’s true in the case of humans and it’s true in the case of AI.”
In an unpublished study, Lombrozo and other researchers adapted six tasks widely used in psychological research and assigned them to LLMs. They identified three tasks for which prompting the LLMs to generate a chain of thought hindered performance. One, for example, was a face-recognition task in which participants are first shown a person’s face and then asked to pick that same person out of a group of faces. Prior research shows that people who are asked to verbally describe a face perform worse when they later try to identify it.
For their study, Lombrozo and colleagues in some cases gave the LLM a zero-shot prompt, which directly instructs the model to answer the question, without additional guidance or examples. In other cases, they employed a chain-of-thought (CoT) prompt, in which the model is directed to describe the steps it took during the exercise. For the face-recognition task, the model performed worse when using CoT prompting as opposed to zero-shot prompting. Conversely, the experiments showed that models sometimes performed better with CoT prompts than zero-shot prompts in a logical-reasoning exercise.
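In prompting terms, the two conditions differ by a single instruction. Here is a minimal sketch of the comparison, again assuming a hypothetical `ask` helper and an invented string-matching scorer rather than the study’s actual materials or evaluation:

```python
def zero_shot_prompt(question: str) -> str:
    # Direct instruction: answer with no additional guidance or examples.
    return f"{question}\nRespond with the answer only."

def chain_of_thought_prompt(question: str) -> str:
    # CoT instruction: describe the intermediate steps before answering.
    return (f"{question}\nThink through the problem step by step, "
            "describe your reasoning, then give a final answer.")

def compare_conditions(items, ask):
    """Score the same items under both prompting conditions.

    `items` maps each question to the string counted as correct, and `ask`
    is whatever function sends a prompt to the model (both illustrative).
    """
    scores = {"zero_shot": 0, "chain_of_thought": 0}
    for question, correct in items.items():
        if correct in ask(zero_shot_prompt(question)):
            scores["zero_shot"] += 1
        if correct in ask(chain_of_thought_prompt(question)):
            scores["chain_of_thought"] += 1
    return scores
```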
This study represents a first step in studying when AI systems show the same performance constraints as humans, and when they work differently from human cognition.
Learning like children
Scientists are testing models that can learn a new word or concept and apply it to other contexts, much like a human child does. A child could learn to skip, and from there understand how to skip backwards or around an object.
Psychology and data science professor Brenden M. Lake is among the researchers pursuing this technology. Lake is codirector of New York University’s Minds, Brains, and Machines initiative, a campus-wide effort focused on how findings about natural intelligence can lead to better AI systems, and how AI can promote better understanding of human intelligence.
Making AI Explain Itself
Among the biggest concerns about machine-learning techniques are the so-called “black box” results they produce. AI systems can predict outcomes but often struggle to explain their reasoning in ways that humans can easily understand.
But some scientists are trying to soften that mystery by advancing a field called explainable artificial intelligence (XAI), an initiative to increase the interpretability, fairness, and transparency of machine learning.
Driving the XAI movement are a variety of questions about algorithmic responses. AI skeptics worry about AI systems being trained on data that reflect human biases. Users often question how algorithms make their decisions, leaving them wary of AI solutions—especially for critical applications such as medical diagnoses and autonomous vehicles.
Governments are trying to address these concerns as AI expands. U.K. policymakers published a white paper that, among other recommendations, calls for transparency, explainability, and accountability in AI. As part of the European Union’s new AI Act, automated-system creators must inform users about data sources, algorithms, and decision-making processes in their products. The White House is developing AI regulations that include transparency and accountability standards.
But the complexity of AI algorithms can make it challenging to provide explanations that are satisfyingly interpretable and complete in their description of the model’s machinations, said cognitive psychologist J. Eric T. Taylor, a product manager at Canadian AI research and development company Borealis AI.
“The motivation of that legislation is to give people transparency surrounding how algorithmic decisions are made,” Taylor told the Observer. “But transparency and interpretability can be at odds when the technology is so complex.”
Taylor is among a variety of researchers addressing the black-box issue. The human mind, he said, is itself a black box that cognitive scientists have studied for more than 150 years.
Experimental psychologists have generated robust models of perception, language, and decision making by drawing inferences about the structure and functions of the human mind, he said.
“The whole idea of artificial cognition is that we can apply cognitive science to AI models,” Taylor said. “We can curate a set of stimuli designed to test a hypothesis, give it to the machine, and make a causal inference based on how the machine behaves and how it treats the stimuli.”
In a 2021 paper, Taylor and coauthor Graham Taylor, who leads the Machine Learning Research Group at the University of Guelph, suggested a research pipeline that identifies a behavior and its environmental correlates, infers its cause, and identifies the conditions that trigger a behavior change—all mimicking the art of research in psychology labs and science in general. Researchers can also expose the machine to controlled circumstances designed to rule out alternative explanations for a behavior, they wrote (Taylor & Taylor, 2021).
Taylor and his colleagues recently demonstrated this approach by measuring a neural network’s speed in responding to a variety of tasks that involve image classification and pattern recognition. That study illustrated how measuring something as simple as response time could help explain how reliably a network handles uncertainty or ambiguity (Taylor et al., 2021). In autonomous vehicles, for example, checking the system’s response time could identify whether it’s processing information efficiently or encountering delays that could be critical to safety.
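Their response-time measure is tailored to their models, but the general move, putting a stopwatch around the moment a system commits to an output, can be sketched simply. The `model.predict` interface below is an assumption made for illustration, not their code:

```python
import time

def measure_response_times(model, inputs):
    """Record a wall-clock 'response time' for each input, in seconds.

    `model` is assumed to expose a predict(x) method. Unusually slow items
    can flag inputs the system finds ambiguous or hard to process.
    """
    results = []
    for x in inputs:
        start = time.perf_counter()
        prediction = model.predict(x)
        elapsed = time.perf_counter() - start
        results.append((prediction, elapsed))
    return results
```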
Taylor also sees a need for more research on the intelligibility of the explanations that the AI systems produce.
“XAI techniques were designed to offer explanations so that people could understand and develop a concept for how machines think or behave,” he said. “And yet most papers lack an empirical study of whether or not the explanation is effective at conveying that understanding to people. It’s a huge blind spot.”
In his lab, Lake is working on models that mimic the human ability to learn new concepts from just a few examples. In one collaboration, he and linguistics researcher Marco Baroni of Pompeu Fabra University in Spain created a technique called meta-learning for compositionality (MLC), a neural network that is continuously updated to improve its skills over a series of simple data inputs. MLC receives a new word and is asked to use it compositionally—for instance, to take the word “jump” and then create new word combinations, such as “jump twice” or “jump around.” MLC then receives another new word, and so on, each time improving the network’s compositional skills.
Lake and Baroni also ran a series of experiments in which human participants completed the same tasks given to MLC. The researchers made up and defined some words (e.g., zup and dax) and had the participants and the AI system learn the meanings and apply them in different ways.
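The flavor of the task can be conveyed with a toy interpreter for made-up words; the vocabulary and outputs below are invented for illustration and are not the study’s materials. The generalization test is whether a learner that has seen “twice” only with familiar words can apply it to a newly learned word such as “dax.”

```python
# Toy compositional grammar: primitive words name outputs, modifier words
# transform sequences. All words and symbols here are made up.
PRIMITIVES = {"jump": "JUMP", "dax": "RED_CIRCLE", "zup": "BLUE_SQUARE"}
MODIFIERS = {
    "twice": lambda seq: seq * 2,
    "thrice": lambda seq: seq * 3,
}

def interpret(phrase: str) -> list[str]:
    """Map a phrase such as 'dax twice' to a sequence of output symbols."""
    words = phrase.split()
    sequence = [PRIMITIVES[words[0]]]
    for word in words[1:]:
        sequence = MODIFIERS[word](sequence)
    return sequence

print(interpret("jump twice"))   # ['JUMP', 'JUMP']
print(interpret("dax thrice"))   # ['RED_CIRCLE', 'RED_CIRCLE', 'RED_CIRCLE']
```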
MLC performed as well as, and in some cases better than, the human participants. MLC also outperformed the large language models ChatGPT and GPT-4, which have difficulty with this compositional generalization (Lake & Baroni, 2023).
“We’re really just trying to train language models in a way that is more like a child learning a new word,” Lake explained in an interview. “You introduce a new concept, give the system a few sentences about it, and then generate new sentences that use that concept.
“We’re studying whether this allows you to make more efficient use of the data instead of constantly trying to stretch the generalization of the system,” he continued.
In another experiment, Lake and his colleagues trained a neural network using video recorded on a headcam that a child wore from 6 months of age through their second birthday. They found that the AI system could learn a considerable number of words and concepts from the sights and sounds the child experienced (Vong et al., 2024).
Developmental psychology is the basis of novel AI models in other labs as well. Psychologists and computer scientists at the Pennsylvania State University, for example, have developed a way to train AI systems to identify objects and navigate their surroundings more efficiently, modeling the approach on babies’ early visual learning. They found that their model is more energy efficient and outperforms base models by nearly 15%. The approach shows promise for a number of applications, such as autonomous robots that can learn to navigate unfamiliar environments, the scientists reported (Zhu et al., 2024).
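Both the headcam study and the Penn State work build on contrastive learning, in which the embeddings of things that occur together (a video frame and the utterance heard with it, or two views of the same scene) are pulled together while mismatched pairings are pushed apart. Below is a minimal sketch of that objective in plain NumPy, not tied to either paper’s actual architecture:

```python
import numpy as np

def contrastive_loss(view_a, view_b, temperature=0.1):
    """InfoNCE-style loss for a batch of paired embeddings.

    view_a[i] and view_b[i] embed things that co-occurred (e.g., an image
    and its accompanying utterance); every other pairing in the batch
    serves as a negative example.
    """
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                # similarity of every pairing
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # matched pairs sit on the diagonal

# Example with 4 random pairs of 8-dimensional embeddings
rng = np.random.default_rng(0)
print(contrastive_loss(rng.normal(size=(4, 8)), rng.normal(size=(4, 8))))
```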
Artificial decision-making and empathy
Psychological scientists are also at the forefront of creating AI systems that can match human decision-making processes. AI scientist Farshad Rafiei and colleagues in the Georgia Institute of Technology lab of cognitive psychologist Dobromir Rahnev have developed a neural network that imitates our ability to gather evidence and weigh options to reach a conclusion.
The researchers trained the network, called RTNet, to perform a perceptual decision-making task. Through trial and error, the network accumulated information before reaching a final decision. The scientists assigned the same task to 60 students. They found both the students and the network generated similar accuracy rates and response times (Rafiei et al., 2024).
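RTNet’s internals are more elaborate, but the behavioral signature it reproduces, a choice and a response time emerging from the same noisy accumulation of evidence, can be illustrated with a generic accumulator model (akin to drift diffusion). The parameters below are arbitrary:

```python
import random

def accumulate_to_decision(drift=0.1, noise=1.0, threshold=5.0, max_steps=10_000):
    """Draw noisy evidence samples until one option's total crosses a bound.

    Returns (choice, response_time). A positive `drift` favors option A;
    because the same process produces both the decision and its timing,
    accuracy and response times can be compared with human data.
    """
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += random.gauss(drift, noise)    # one noisy sample of evidence
        if evidence >= threshold:
            return "A", step
        if evidence <= -threshold:
            return "B", step
    return None, max_steps                        # no decision reached

trials = [accumulate_to_decision() for _ in range(1000)]
accuracy = sum(choice == "A" for choice, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
print(f"accuracy: {accuracy:.2f}, mean response time: {mean_rt:.1f} steps")
```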
Meanwhile, an international team of researchers has developed a model that uses mathematical psychology principles to predict and interpret human emotions. The scientists, hailing from Finland and the Netherlands, introduced a computational cognitive model that simulates emotion during interactive episodes.
Across two experiments, online participants rated their emotions as they carried out tasks designed to elicit three targeted emotions: happiness, boredom, and irritation. The researchers demonstrated that the model could predict the users’ emotional responses. They are now exploring potential applications for this emotional understanding and say their work could lead to AI systems that are more intuitive and responsive to user needs (Zhang et al., 2024).
Next steps: AI that can pivot
Though the initial goal of AI was to replicate human behavior, the technology still lacks the flexibility we possess when faced with decisions in dynamic environments. In a recent paper published in Perspectives on Psychological Science, Cleotilde Gonzalez of Carnegie Mellon University proposed the need for artificial agents that better mimic, and improve on, the cognitive steps we make when handling unexpected or changing circumstances (Gonzalez, 2024).
APS Fellow Gerd Gigerenzer, an expert on decision-making, says complex algorithms can best humans in well-defined, stable situations, such as a game of chess. But they show no superiority in tackling ill-defined problems and unexpected events, such as the spread of a new infectious disease.
“The human mind evolved to survive in a world of uncertainty,” he said in an interview. “Current AI, such as deep neural networks, have problems with uncertainty, intractability.”
Gigerenzer, director of the Harding Center for Risk Literacy at the Max Planck Institute for Human Development, is among many scientists advocating the use of psychological principles in the development of AI algorithms. Gigerenzer refers to this as psychological AI.
Many AI algorithms operate on abstract mathematical principles that don’t account for the nuances of human thought, Gigerenzer said. To deal with uncertain and quickly changing situations, humans rely on a set of heuristics to adapt, he explained.
He cited the recency heuristic—our tendency to base decisions on the most recent information—as an example of the human responses that AI systems often overlook. The recency heuristic is a simple algorithm that relies on a single data point yet can be surprisingly effective, he explained. For instance, research by Gigerenzer and his colleagues demonstrated that the recency heuristic consistently outperformed Google’s big-data algorithms in predicting flu spread.
“In a stable world, relying on the most recent events only and ignoring the base rates (the data of the past) might indeed be an error,” he wrote in a recent article for Perspectives on Psychological Science. “But in an unstable world, in which unexpected events happen, relying on recency may well lead to better decisions” (Gigerenzer, 2023).
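As a toy version of the comparison Gigerenzer describes: the recency heuristic forecasts next week from last week alone, while a base-rate forecaster averages over all past weeks. The weekly figures below are invented for the example (his actual comparison involved Google’s flu-prediction algorithms):

```python
# Invented weekly counts of flu-related doctor visits, rising as an outbreak spreads
weekly_cases = [120, 130, 150, 210, 340, 500, 620, 580, 430, 300]

def recency_forecasts(series):
    # Predict each week from the single most recent observation.
    return series[:-1]

def base_rate_forecasts(series):
    # Predict each week from the mean of everything observed so far.
    return [sum(series[:t]) / t for t in range(1, len(series))]

def mean_absolute_error(predictions, actual):
    return sum(abs(p - a) for p, a in zip(predictions, actual)) / len(actual)

actual = weekly_cases[1:]
print("recency  :", mean_absolute_error(recency_forecasts(weekly_cases), actual))
print("base rate:", mean_absolute_error(base_rate_forecasts(weekly_cases), actual))
```

On a trending series like this one, the single most recent data point tracks the outbreak far better than the long-run average, which is the point Gigerenzer makes about unstable environments.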
By incorporating insights from psychology, developers can create more intuitive and effective AI systems, Gigerenzer said. His article outlined key psychological concepts that could enhance AI design, including intuitive judgment, emotion, and fast-and-frugal heuristics.
AI systems, he suggested, should mimic these adaptive decision-making strategies, leading to more efficient and user-friendly outcomes.
Gigerenzer, G. (2023). Psychological AI: Designing algorithms informed by human psychology. Perspectives on Psychological Science, 19(5), 839–848. https://doi.org/10.1177/17456916231180597
Gonzalez, C. (2024). Building human-like artificial agents: A general cognitive algorithm for emulating human decision-making in dynamic environments. Perspectives on Psychological Science, 19(5), 860–873. https://doi.org/10.1177/17456916231196766
Lake, B. M., & Baroni, M. (2023). Human-like systematic generalization through a meta-learning neural network. Nature, 623(7985), 115–121. https://doi.org/10.1038/s41586-023-06668-3
Lombrozo, T. (2024). Learning by thinking in natural and artificial minds. Trends in Cognitive Sciences, 28(11), 1011–1022. https://doi.org/10.1016/j.tics.2024.07.007
Rafiei, F., Shekhar, M., & Rahnev, D. (2024). The neural network RTNet exhibits the signatures of human perceptual decision-making. Nature Human Behaviour, 8(9), 1752–1770. https://doi.org/10.1038/s41562-024-01914-8
Taylor, J. E. T., Shekhar, S., & Taylor, G. W. (2021). Neural response time analysis: Explainable artificial intelligence using only a stopwatch. Applied AI Letters, 2(4), e48. https://doi.org/10.1002/ail2.48
Taylor, J. E. T., & Taylor, G. W. (2021). Artificial cognition: How experimental psychology can help generate explainable artificial intelligence. Psychonomic Bulletin & Review, 28, 454–475. https://doi.org/10.3758/s13423-020-01825-5
Vong, W. K., Wang, W., Orhan, A. E., & Lake, B. M. (2024). Grounded language acquisition through the eyes and ears of a single child. Science, 383(6682). https://doi.org/10.1126/science.adi1374
Zhang, J. E., Hilpert, B., Broekens, J., & Jokinen, J. P. P. (2024). Simulating emotions with an integrated computational model of appraisal and reinforcement learning. CHI’24: Proceedings of the CHI Conference on Human Factors in Computing Systems, Article 703. http://doi.org/10.1145/3613904.3641908
Zhu, L., Wang, J. Z., Lee, W., & Wyble, B. (2024). Incorporating simulated spatial context information improves the effectiveness of contrastive learning models. Patterns, 5, Article 100964. https://doi.org/10.1016/j.patter.2024.100964