-
Do Risky Drinkers Think Differently? Insights From Cognitive Experiments
Podcast: Under the Cortex hosts Elizabeth Goldfarb (Yale University) to explore the cognitive profile of risky drinkers, as well as possible interventions for those struggling with alcohol use.
-
PSPI Live: Understanding the Stigma Associated With Substance Dependence
In an October 25 APS PSPI Live webinar, experts in the field discussed substance abuse and dependence from a nuanced perspective that goes beyond common misconceptions. A recording of the symposium is now available for registrants and APS members.
-
New Content From Perspectives on Psychological Science
A sample of research on digital contact tracing in pandemics, the interpersonal distance theory of autism, the impact of school closures on children’s mental health and learning, and much more.
-
Stigma Against People With OCD Varies With Their Obsessions
Individuals with OCD face stigma both for the nature of their intrusive thoughts and for their distress, according to a new study in Clinical Psychological Science.
-
How Lack of Independent Play Is Impacting Children’s Mental Health
JUANA SUMMERS, HOST: We've been hearing a lot about the mental health crisis among children. Researchers have looked at a number of reasons, from social media use to isolation during the pandemic. But a recent commentary published in the Journal of Pediatrics looked at another factor - the decline of independent activity and play for children. Peter Gray is the lead author of that piece. For years, he's been following the trend of declining mental health in kids and the declining levels of independent play. He joins us now. Welcome.
PETER GRAY: I'm very happy to be here.
-
Humans Absorb Bias From AI—and Keep It After They Stop Using the Algorithm
Artificial intelligence programs, like the humans who develop and train them, are far from perfect. Whether it’s machine-learning software that analyzes medical images or a generative chatbot, such as ChatGPT, that holds a seemingly organic conversation, algorithm-based technology can make errors and even “hallucinate,” or provide inaccurate information. Perhaps more insidiously, AI can also display biases introduced through the massive data troves these programs are trained on—biases that are undetectable to many users. Now, new research suggests that human users may unconsciously absorb these automated biases.