Up-and-Coming Voices: Myths and Misinformation
Morally Unacceptable Actions Become Less Unacceptable Through Imaginative Moral Shifts • Math Misconceptions Abound When Adults Reason About COVID-19 Health Statistics • Testing the Benefits of True/False Testing Before and After Learning • People of All Ages Demonstrate a Limited Ability to Distinguish Between Original and Manipulated Images • Accuracy and Social Incentives Shape Belief in (Mis)Information • National Narcissism Is Associated With the Spread of Conspiracy Theories During Public Health Crises: Evidence From 54 Countries
As part of the 2021 APS Virtual Convention, researchers had the opportunity to connect with colleagues and present their work to the broader scientific community in 15-minute flash talks. This collection highlights students’ and early-career researchers’ work on misinformation and combating widely held misconceptions in psychological science and beyond.
Morally Unacceptable Actions Become Less Unacceptable Through Imaginative Moral Shifts
Beyza Tepe (Bahçeşehir University) and Ruth M. Byrne (Trinity College Dublin)
What did the research reveal that you didn’t already know?
People change their minds about their moral evaluations of other people’s behavior, and our research shows they can do so very rapidly, within a matter of seconds and without any additional facts, based solely on imagining alternative interpretations of what happened. We systematically examined how people update their intuitive judgments about events that violate moral norms. For instance, suppose a man on an airplane is about to take his seat in front of you. He then calls the flight attendant and says he does not want to sit next to a Muslim passenger seated in his row and that the passenger must be moved to another seat. Would you consider the situation to be morally acceptable or not? Most people judge it to be highly morally unacceptable. Do you think you might change your mind and consider it to be acceptable?
Often our moral judgments are deeply rooted in our beliefs about our own moral character, and in our belief that certain actions have an objective moral value, so we may be reluctant to alter our moral evaluations of them. However, when we asked people to imagine circumstances in which the action would have been moral, they were readily able to do so. For example, they imagined that perhaps the Muslim passenger had been rude, so the request to move him had nothing to do with his religion, or that the passenger who made the request believed the Muslim passenger was about to harm him and asked for him to be moved out of self-protection. Participants updated their moral judgments and considered the action less morally unacceptable when they imagined such possibilities. They changed their evaluations even when they had only a few seconds to think about alternative circumstances, and their moral judgments changed even more when they could deliberate for longer.
Our research also found that such imaginative shifts occur not only for immoral actions but also for irrational ones. For example, when people hear that a man on an airplane told the flight attendant he did not want to sit next to any passenger and asked her to move everyone in the row to other seats, they considered his action irrational. But when they imagined circumstances in which it could have been rational, they changed their minds. Our research suggests that our judgments are influenced not only by the facts as we know them but also by our imagination of alternative possibilities.
How might your research enhance our understanding of misinformation or, more directly, help us combat it?
We found that people can reinterpret a situation within a few seconds, even in the absence of further facts, based solely on their imagination of alternative circumstances. Imagination is a powerful reasoning resource available to everyone, although it often remains untapped. Our findings are consistent with research showing that people are capable of searching for counterexamples and generating counterarguments but tend not to do so spontaneously unless the situation prompts them. Our results suggest that one way to combat misinformation is to help people imagine alternative circumstances when they interpret an event. Such imagination aids could be provided through specific prompts in the environment where information is presented or through general educational training.
Math Misconceptions Abound When Adults Reason About COVID-19 Health Statistics
Clarissa Thompson, Jennifer M. Taber, and Marta Mielicki (Kent State University), Pooja G. Sidney (University of Kentucky), and Percival Matthews (University of Wisconsin-Madison)
What did the research reveal that you didn’t already know?
In our recently published paper in the special issue on COVID-19 in the Journal of Experimental Psychology: Applied, we showed that one source of widespread early confusion about the severity of COVID-19 was whole-number bias. This pervasive math misconception led people to consider only the absolute number of deaths or the absolute number of people infected by COVID-19, rather than the rate of deaths relative to the total number of people infected. We taught adults to calculate case-fatality rates via step-by-step instructions and number-line visualizations, which diminished their math misconceptions and improved their accuracy. Put another way, we demonstrated that an educational technique, adapted from interventions developed to improve children’s math understanding, could also help adults with essential health-related mathematical reasoning. These findings have real-world implications during a global pandemic.
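To make the misconception concrete, here is a minimal sketch of the case-fatality-rate calculation; the disease labels and counts below are hypothetical illustrations, not the study’s actual materials:

```python
# Illustrative sketch only: the numbers below are hypothetical, not the
# study's stimuli. It shows why comparing absolute death counts
# (whole-number bias) can point to the opposite conclusion from
# comparing case-fatality rates.

def case_fatality_rate(deaths: int, cases: int) -> float:
    """Deaths relative to the total number of people infected."""
    return deaths / cases

disease_a = {"deaths": 9_000, "cases": 1_000_000}  # hypothetical
disease_b = {"deaths": 2_000, "cases": 50_000}     # hypothetical

rate_a = case_fatality_rate(**disease_a)  # 0.009, i.e., 0.9%
rate_b = case_fatality_rate(**disease_b)  # 0.040, i.e., 4.0%

# Whole-number bias: Disease A "looks" deadlier (9,000 > 2,000 deaths)...
print(disease_a["deaths"] > disease_b["deaths"])   # True

# ...but relative to the number infected, Disease B is far deadlier.
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}")         # A: 0.9%  B: 4.0%
```

The point is the last line: the judgment should track the rates, not the raw counts.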
Apart from the research findings, working on this project solidified our belief in the value of interdisciplinary collaboration. Just days after our universities and our children’s schools were locked down in March 2020, we gathered experts in math cognition, education, social, health, and clinical psychology to launch an educational intervention followed by 10 days of experience sampling. Although time was of the essence, we upheld open science principles and carefully preregistered our data collection and analytic plans. The end result was several impactful papers.
How might your research enhance our understanding of misinformation or, more directly, help us combat it?
People of all ages and levels of math skill demonstrate whole-number bias, which can cause mistaken understanding of health risks (in this case, the severity of COVID-19 relative to the flu). We combated this pernicious misconception by crafting an effective educational intervention that improved adults’ accuracy in solving health-related problems, a finding we have since replicated. Our brief intervention could be adapted for other health settings or future health crises in which people must reason about one health statistic relative to another.
Testing the Benefits of True/False Testing Before and After Learning
Kelsey K. James and Benjamin C. Storm (University of California, Santa Cruz)
What did the research reveal that you didn’t already know?
This research expanded our knowledge of the relationship between pretesting (testing before learning) and posttesting (testing after learning). Some prior research found greater benefits from posttesting than from pretesting, whereas other research found pretesting to be the more beneficial strategy for improving final test performance. Our research adds a layer of nuance to this debate. We compared the benefits and detriments of pretesting versus posttesting with true/false tests and found that both led to similar improvements in final test performance. However, posttesting led to a significantly higher rate of intrusions on a final cued-recall test (defined here as reproducing the false information from the false items on the initial true/false test). This difference in intrusion rates between posttesting and pretesting held up across all three experiments, despite the addition of feedback in Experiments 2 and 3.
How might your research enhance our understanding of misinformation or, more directly, help us combat it?
The finding from this research most relevant to misinformation is that not only the presence of feedback but also its type may be critical in preventing learners from reproducing false information. We found a stark difference in our data when feedback was absent (Experiment 1) or merely corrective (Experiment 2; e.g., “This answer was true/false”) versus when feedback was substantive (Experiment 3; e.g., “This answer was true/false because…”). With substantive feedback, overall intrusion rates of false information were around 3%, compared with 16% in Experiment 1 and 11% in Experiment 2. More research is needed to fully understand how and why substantive feedback has this promising effect of reducing the likelihood that false information will be reproduced.
People of All Ages Demonstrate a Limited Ability to Distinguish Between Original and Manipulated Images
Sophie J. Nightingale (Lancaster University), Kimberly A. Wade and Derrick Watson (University of Warwick)
What did the research reveal that you didn’t already know?
Our research revealed that, overall, adults had a limited ability to discriminate between original and manipulated images and that older adults were slightly less accurate than younger and middle-aged adults in detecting and locating manipulations. Viewing a warning video that explained some common image-manipulation techniques improved performance only marginally.
We also found that people tended to be overconfident in their decisions, suggesting that people’s confidence reports are not reliable indicators of their accuracy. We examined the strategies participants reported using to distinguish between genuine and manipulated images. The most commonly reported strategy, checking for lighting or shadow inconsistencies, was not associated with improved performance. Yet several other, less frequently reported strategies, such as checking for photometric inconsistencies, were associated with improved performance. What’s more, older adults were more likely to report checking for lighting or shadow inconsistencies and less likely to report using the strategies associated with better manipulation detection. The strategies older adults used might therefore partly account for their poorer performance on the task.
How might your research enhance our understanding of misinformation or, more directly, help us combat it?
Our research suggests that there is no simple way to combat people’s belief in, or sharing of, image-based misinformation. People showed a limited ability to determine whether images were authentic, and, in line with other research findings, warnings may have only a limited effect. Given that some strategies seemed more effective than others at improving people’s ability to detect image manipulations, it might be possible to develop more intensive training programs to help people distinguish the real from the fake. That said, the landscape is changing apace. With the arrival of highly realistic synthetic media, discriminating real from fake is becoming an increasingly difficult task for human perception; we have new research to show this. We think a priority for cognitive scientists at this point should be to determine the mechanisms that underpin the detection of manipulations and to begin examining the range of factors that might help or hinder performance. Finally, given people’s limited perceptual ability to sort the real from the fake, it is crucial that platforms, especially social media companies, do more to protect people from misinformation.
Accuracy and Social Incentives Shape Belief in (Mis)Information
Steve Rathje and Sander van der Linden (University of Cambridge), and Jay J. Van Bavel (New York University)
What did the research reveal that you didn’t already know?
Liberals and conservatives tend to be divided about what they consider to be true and false news. However, it is unclear whether this partisan divide is due to differences in knowledge or to a lack of motivation to be accurate. To help differentiate between these two explanations, we provided people with financial incentives to correctly identify true versus false headlines. When people were paid to be accurate, they became more accurate at identifying true versus false headlines, and the partisan divide in belief decreased considerably. This effect was mainly driven by people expressing greater belief in politically incongruent true news headlines (e.g., conservatives expressing more belief in headlines from The New York Times).
Many studies have also shown that conservatives tend to believe in and share more misinformation. However, when conservatives were motivated to be accurate, the gap in accuracy of beliefs between conservatives and liberals closed by about 60%. Thus, much of conservatives’ greater reported belief in misinformation may reflect different motivations rather than a simple lack of ability to discern truth from falsehoods.
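To unpack the arithmetic behind “closed by about 60%,” here is a minimal sketch; the gap values are invented for the example and are not the study’s data:

```python
# Hypothetical illustration of what "the gap closed by about 60%" means;
# these values are invented for the example, not taken from the study.
baseline_gap = 0.20   # liberal-conservative accuracy gap, no incentive
incentive_gap = 0.08  # the same gap when accuracy is paid

# Proportion of the original gap eliminated by the incentive.
closure = 1 - incentive_gap / baseline_gap
print(f"{closure:.0%}")  # 60%
```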
How might your research enhance our understanding of misinformation or, more directly, help us combat it?
While many efforts focus on teaching people skills to identify misinformation, our research demonstrates that it is also very important for people to be motivated to be accurate. Additionally, one of our studies found that prompting people to think about which news headlines would be liked by their in-group decreased the effect of the accuracy incentive and increased people’s reported intentions to share politically congruent true and false news.
In other words, social motivations to share content that will be liked by our in-groups, such as those present on social media, can distract from accuracy motivations. Thus, to combat misinformation, we may want to consider how to shift the incentive structure (on social media and otherwise) to increase accuracy motivations and decrease motivations to share politically convenient falsehoods.
To read more about Rathje and colleagues’ study, view the preprint.
National Narcissism Is Associated With the Spread of Conspiracy Theories During Public Health Crises: Evidence From 54 Countries
Anni Sternisko and Jay J. Van Bavel (New York University), Aleksandra Cichocka (University of Kent), and Aleksandra Cislak (SWPS University of Social Sciences and Humanities)
What did the research reveal that you didn’t already know?
When my collaborators and I started this project, I was confident that social identity—more specifically, national narcissism—played a powerful role in the spread of COVID-19 conspiracy theories. However, most research on conspiracy theories has been conducted in WEIRD (Western, educated, industrialized, rich, democratic) countries. In other words, the literature that informed our hypotheses was based on a tiny part of the world’s population.
Conspiracy theories are a complex phenomenon that emerges from the interplay of motivational, cognitive, and contextual factors. We examined 54 countries from six continents, so I was prepared to find some variation. When I saw how robust and consistent the relationship between national narcissism and belief in COVID-19 conspiracy theories was across nations, I was really impressed. National narcissism and COVID-19 conspiracy theories were positively related in almost every country, and this relationship held even after adjusting for various confounds, such as knowledge about the pandemic, reflection, and even people’s general tendency to believe conspiracy theories. Despite the complexity of conspiracy theories, some of their psychological processes appear to be quite universal and fundamental.
How might your research enhance our understanding of misinformation or, more directly, help us combat it?
One important message of our study is that conspiracy theories are “social.” They do not emerge and spread simply because people are uninformed or do not think critically. Our data were cross-sectional, but based on previous research, I am confident that national narcissism promotes people’s readiness to believe and disseminate COVID-19 conspiracy theories as a buffer against the social-image threats posed by the pandemic. If that is the case, people scoring high in national narcissism are particularly vulnerable to false narratives and deserve special attention when we design strategies to combat the spread of misinformation.
For instance, public health messages might benefit from stressing that adherence to health guidelines (e.g., COVID-19 vaccination) helps protect the nation’s image. Further, research suggests that bolstering people’s self-esteem and sense of control can decrease national narcissism, and our society can take concrete steps to do that. Ultimately, many intervention strategies treat the symptoms rather than the causes of misinformation. I believe that the most fruitful way to combat our “infodemic” is to get to the (motivational) roots of the problem, one of which is national narcissism.