AI’s Limits and Potential for Psychological Research and Practice
Science for Society is an APS webinar series focused on educating the public and bringing psychological science to decision-makers working to solve real-world problems. In addition to psychological scientists, participants include public policy decision-makers, news reporters, advocates, and scholars from adjacent fields.
Many people are turning to artificial intelligence (AI) for help with their personal struggles. But those AI models generally lack a key trait of a human therapist: empathy.
Developmental psychologist David S. Yeager pointed to that deficit during a January 17 APS webinar, “AI Buzz: What’s Not New?” Research on AI tools such as large language models (LLMs)—which are trained to recognize, predict, and respond to textual and vocal input—focuses mainly on the tools’ functionality rather than on their impact on the humans who use them, he said.
“That’s a problem. If you ask it to give advice to someone who’s depressed, is it giving good advice? What we found is the answer is often no,” said Yeager, a University of Texas at Austin scientist who helped Google evaluate some of the company’s AI models before their public release.
Joining Yeager on the webinar, part of APS’ Science for Society series, were Marjolein Fokkema, who studies machine learning and psychological assessment at Leiden University, and Dora Demszky, an education data scientist at Stanford University.
In their evaluation for Google, Yeager and colleagues asked the LLMs to respond to sample crisis helpline texts.
A message, for example, would state “I’m really scared,” he recounted. “And the large language model would say things like, ‘No you’re not.’ It’s not empathic in any way.”
Although LLMs have shown promise in predicting an individual’s risk for suicide, most of the applications that people are now using remain seriously limited in their ability to truly help human beings, Yeager said. He suggested that psychological scientists take a major role in helping technologists develop best-in-class data for training LLMs.

Still, AI models are showing some promising applications, said Demszky, who has collaborated with Yeager on studying the human behaviors that AI can—and cannot—mimic. She and her colleagues, for example, are devising LLMs that provide teachers with feedback on their interactions with students. Drawing on findings from research on growth mindset, the scientists aim to help teachers foster the learning and well-being of their pupils.
Fokkema discussed the ability of machine learning to predict psychological outcomes. She cited studies showing that many machine-learning methods provide no more than a marginal improvement over traditional research methods. She also described her own work developing algorithms to predict mental health problems and other psychological outcomes.
A recording of the full webinar is available to APS members and registered attendees.
If you or someone you love is struggling with thoughts of suicide, help is available. Call or text 988 to connect with the 988 Suicide and Crisis Lifeline.