Crowding Out Falsehoods

Psychological scientists are harnessing the biases and expertise of imperfect individuals to enhance the wisdom of crowds.

Quick Take

  • Creating cognitive models of crowds
  • Putting crowds in context
  • Learning from a crowd of two
  • Promoting divergent opinions

If the COVID-19 pandemic and ongoing political upheavals around the world have made anything clear, it’s that misinformation poses a real threat to the health and well-being of people everywhere. Inaccurate or intentionally misleading information can interfere with evidence-based public health initiatives and split partisan groups into political silos that seem to inhabit separate realities.  

Expert fact-checkers, chief among them journalists, commit their professional lives to verifying the accuracy of the information politicians, businesses, and media outlets share with the public. But what if the very individuals who are vulnerable to accepting misinformation could be a part of defending against it, too? 

An individual layperson’s response to misinformation may be more subject to personal biases than a professional fact-checker’s, but politically balanced crowds of laypeople can identify low-quality sources nearly as accurately as professionals, a team of researchers led by Cameron Martel (Massachusetts Institute of Technology) wrote in a recent article for Perspectives on Psychological Science.  

“The intuition underlying the wisdom of crowds is that in many contexts, the biases and errors that arise in nonexpert judgments will tend to cancel out when aggregated, giving rise to accurate aggregate judgments,” Martel and colleagues Jennifer Allen, David G. Rand (MIT), and Gordon Pennycook (Cornell University) wrote. “Relatively small layperson crowds can actually produce aggregate judgments that approximate those of professional fact-checkers.” 

Harnessing the wisdom of high-quality online crowds could help scale up efforts to identify misinformation and intervene against it, Martel said in an interview. 

“Professional fact-checkers are a vital part of the content moderation environment but are limited in number, time, and resources,” he said, “and so figuring out scalable approaches to supplement this is really important. I think this is why crowdsourced fact-checking has potential to be another helpful tool in the toolbox of content moderation.”  

The aggregate judgments of crowds as small as 20 people correlate with those of professional fact-checkers as well as those of amateur investigators, the researchers wrote. Crowdsourced fact-checking can be used to evaluate individual headlines, posts, and news articles, and these evaluations can themselves be aggregated to produce less-biased ratings of news sources as well. 


Individual judgments can be highly partisan, but careful aggregation can help extract wisdom from a politically contentious media landscape, Martel and colleagues wrote. Although Democrats and Republicans are both more likely to flag content created by the other side as misleading, the symmetrical nature of this political polarization means that such biases essentially cancel each other out in politically balanced crowds, the researchers explained. 

“Politically balanced crowd ratings of source quality are highly similar to fact-checker ratings and can be robust to partisan differences in overall media trust,” they said. 
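
To see how symmetrical partisan biases can wash out in the aggregate, consider a toy simulation. It is not the researchers' data or method: the sources, their "true" quality scores, the size of the partisan penalty, and the crowd compositions below are all invented for illustration. Each partisan rater penalizes sources that lean toward the other side, yet a politically balanced crowd's average ratings track the underlying quality far better than a one-sided crowd's.

```python
# Hypothetical simulation: opposing partisan biases largely cancel in a balanced crowd.
import random

random.seed(0)

# Invented "true" quality scores (0-1), standing in for fact-checker ratings.
true_quality = {"source_A": 0.9, "source_B": 0.7, "source_C": 0.4, "source_D": 0.2}
# Invented partisan lean of each source (+1 leans one way, -1 the other).
lean = {"source_A": -1, "source_B": +1, "source_C": -1, "source_D": +1}

def rate(source, rater_party, bias=0.25, noise=0.15):
    """One layperson's rating: true quality, minus a penalty when the source
    leans toward the other party, plus random noise."""
    penalty = bias if lean[source] != rater_party else 0.0
    return true_quality[source] - penalty + random.gauss(0, noise)

def crowd_mean(source, parties):
    return sum(rate(source, p) for p in parties) / len(parties)

balanced = [+1, -1] * 10   # 10 raters from each party
one_sided = [+1] * 20      # 20 raters from a single party

for s in true_quality:
    print(f"{s}: truth={true_quality[s]:.2f}  "
          f"balanced crowd={crowd_mean(s, balanced):.2f}  "
          f"one-sided crowd={crowd_mean(s, one_sided):.2f}")
```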

Political partisanship can also serve as a powerful motivation for people to voluntarily evaluate content online, Martel and colleagues added. However, using a wisdom-of-crowds approach to evaluate misinformation still carries the risk of promoting the “tyranny of the majority,” as a popular idea is not necessarily an accurate one. It’s also possible for political groups to coordinate attacks on content they disagree with, artificially tanking the reliability ratings. And people are more likely to evaluate content that has already received a lot of engagement, the researchers continued. 

Nonetheless, media companies could enhance the wisdom of online crowds to combat misinformation. 

“Social media platforms and other online digital spaces have long been bastions for consolidating collective intelligence,” Martel and colleagues wrote. “Platforms and practitioners should continue to empower their users and community bases and enable them to engage in fact-checking in order to allow for scalable action against misinformation.”  

Creating cognitive models of crowds 

Researchers have many options for harnessing the wisdom of crowds to tackle societal problems, cognitive psychologist Michael D. Lee (University of California, Irvine) wrote in an upcoming Current Directions in Psychological Science article. When researchers average participants’ evaluations, such as when estimating the height of a building, they are using a signal-to-noise approach to amplify the signal of common responses while filtering out the noise of uncommon estimates, Lee explained. The “jigsaw puzzle” approach, on the other hand, involves combining participants’ observations to create a more complete picture of an issue or event. 
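
As a minimal sketch of the signal-to-noise approach, the snippet below averages many noisy guesses of a building's height; the crowd's average lands much closer to the truth than the typical individual guess. The height, noise level, and crowd size are assumptions made purely for illustration.

```python
# Toy illustration of signal-to-noise averaging: noisy individual guesses,
# accurate crowd average. All numbers are invented.
import random

random.seed(1)

true_height_m = 120.0
guesses = [random.gauss(true_height_m, 25.0) for _ in range(200)]  # noisy individuals

crowd_estimate = sum(guesses) / len(guesses)
typical_error = sum(abs(g - true_height_m) for g in guesses) / len(guesses)

print(f"crowd estimate: {crowd_estimate:.1f} m (truth: {true_height_m} m)")
print(f"average individual error: {typical_error:.1f} m; "
      f"crowd error: {abs(crowd_estimate - true_height_m):.1f} m")
```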

Researchers could further enhance the wisdom of crowds by adding cognitive models of how humans report information. These models could augment statistical work on collective wisdom by further accounting for individuals’ expertise and biases, Lee wrote. 

This approach could help account for common cognitive biases, such as the human tendency to overestimate small probabilities and underestimate large ones, he explained. This way, people’s responses could be debiased before they are aggregated, allowing researchers to generate estimates and predictions that more accurately reflect the world as it is, rather than people’s perceptions of it. 

“Cognitive models can infer what unbiased knowledge produced the biased judgment and aggregate the inferred knowledge rather than the observed behavior,” Lee wrote. Other applications for cognitive models include: 

  • extracting shared knowledge based on similarities in how crowds rank lists of information, and 
  • using previous responses to project how crowds might respond to new stimuli. 
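
Returning to the debiasing example: the sketch below makes the debias-then-aggregate idea concrete under invented assumptions. It supposes that people report probabilities through a Prelec-style weighting function, which inflates small probabilities and deflates large ones, and that the researcher knows that function well enough to invert it before averaging. This is one possible illustration of the general approach, not Lee's actual model.

```python
# Minimal sketch of "debias, then aggregate," assuming a known Prelec-style distortion.
import math
import random

random.seed(2)

GAMMA = 0.6  # assumed distortion strength; a real model would estimate this from data

def distort(p, gamma=GAMMA):
    """Prelec weighting: how an underlying belief p gets reported."""
    return math.exp(-((-math.log(p)) ** gamma))

def debias(report, gamma=GAMMA):
    """Invert the assumed weighting to infer the underlying belief."""
    return math.exp(-((-math.log(report)) ** (1.0 / gamma)))

true_prob = 0.05  # the rare event's actual probability (invented)
# Each person holds a noisy belief near the truth, then reports it through the distortion.
beliefs = [min(max(random.gauss(true_prob, 0.02), 0.001), 0.999) for _ in range(100)]
reports = [distort(b) for b in beliefs]

# If the model matches how people actually distort, inverting it recovers their beliefs;
# in practice the model is only an approximation.
naive_avg = sum(reports) / len(reports)
debiased_avg = sum(debias(r) for r in reports) / len(reports)
print(f"truth={true_prob:.3f}  naive average={naive_avg:.3f}  debiased average={debiased_avg:.3f}")
```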

Putting crowds in context 

Just because a crowd has assembled doesn’t guarantee the presence of wisdom, said Stephen B. Broomell, a Purdue University psychologist who studies judgment and decision-making.  

Harnessing the wisdom of crowds could allow researchers to provide new insights into how to combat climate change, misinformation, and other crises, but the wisdom may be insufficient if the crowd is too small to take on the problem at hand, said Broomell, who wrote a Perspectives on Psychological Science paper on the topic with APS Fellow Clintin P. Davis-Stober (University of Missouri). 

“You need a crowd large enough to make sure it covers the entire problem, which is sometimes much bigger than you might have imagined if you don’t have any analysis of how big the problem is,” Broomell said. 

Local temperatures can bias people’s perceptions of climate change, for example, so tapping into the collective wisdom may require researchers to look beyond a single geographical region to create an international crowd, Broomell and Davis-Stober suggested. The same appears to be true of pandemics, in which the infection rate and number of fatalities in an individual’s own community may not reflect the severity at the global level. 

Overcoming these and other global/local incompatibilities will require researchers to merge previous work on the wisdom of crowds with findings specific to how individuals experience the problem they are seeking to solve, Broomell and Davis-Stober wrote. 

“By blending ideas about the wisdom of crowds with context-specific research, future researchers can better predict the public’s reaction to novel problems yet to come,” Broomell told the Observer. “The key is to more effectively harness the variability of people’s judgments toward solving societal problems that otherwise would have been impossible to solve. When small groups of individuals create a single solution to a problem that requires many solutions, you can only harness the wisdom of crowds by knowing how to find and combine these isolated insights.” 

Learning from a crowd of two 

When a problem occurs on the individual level, a “crowd” of just two experts could be enough to reach a solution, cognitive psychologists Jennifer E. Corbett (MIT) and Jaap Munneke (Northeastern University) wrote in a Psychological Science article. 

It’s not unheard of for even the most experienced clinician to miss infrequent signs of illness, Corbett and Munneke explained, but combining practitioners’ expertise can significantly improve their diagnostic accuracy. This form of independent collaboration could also be applied in security settings, where it could be used to enhance the accuracy of baggage scans. 

“Even experts routinely miss infrequent targets, such as weapons in baggage scans or tumors in mammograms, because the visual system is not equipped to notice the unusual,” wrote Corbett and Munneke. Instead of trying to enhance the accuracy of individual experts, the researchers suggested that experts could boost their collective accuracy by using independent collaboration to fill in one another’s blind spots without interacting directly. 

“One person could be sitting in a radiology lab or airport looking at an image and halfway around the world at the same time or at a different time, another person could be shown the same images without ever meeting or knowing about the first person (and vice versa),” Corbett explained in an interview. “You would be more likely to get a larger improvement in detection from combining their estimates versus two people sitting beside each other looking together.”  

In a pair of experiments, Corbett and Munneke tasked 34 participants with identifying uncommon targets in images of mammograms or baggage scans. In both cases, averaging participants’ responses increased accuracy by 10% to 22% by decreasing misses and false alarms. The benefits of independent collaboration were even more pronounced, however, when participants with the most dissimilar performance on a basic visual task were paired together. 

“The more independent people’s estimates are, the more benefit you get from combining them,” Corbett said. “It’s all about being ‘different,’ not right or wrong.” 
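
A minimal sketch can show why independence matters when combining two observers. In the simulation below, both members of each pair are equally noisy on their own, but the pair whose errors overlap less gains more from averaging. The signal strength, noise levels, shared-error term, and decision threshold are all invented stand-ins, not the authors' analysis.

```python
# Hypothetical simulation: averaging two observers helps most when their errors are independent.
import random

random.seed(3)

def trial(target_present, shared_sd, own_sd):
    """Two observers' confidence ratings for one image: a common signal,
    an error component they share, and an error component each has alone."""
    signal = 1.2 if target_present else 0.0
    shared = random.gauss(0, shared_sd)
    r1 = signal + shared + random.gauss(0, own_sd)
    r2 = signal + shared + random.gauss(0, own_sd)
    return r1, r2

def accuracy(shared_sd, own_sd, n_trials=20000, threshold=0.6):
    """Decide 'target present' whenever the pair's averaged rating clears the threshold."""
    correct = 0
    for _ in range(n_trials):
        present = random.random() < 0.5
        r1, r2 = trial(present, shared_sd, own_sd)
        correct += ((r1 + r2) / 2 > threshold) == present
    return correct / n_trials

# Each observer's total error variance is the same in both pairs (0.9^2 + 0.44^2 ~= 1.0);
# only how much of that error is shared between the two observers differs.
print(f"similar pair (mostly shared errors):   {accuracy(shared_sd=0.9, own_sd=0.44):.1%}")
print(f"dissimilar pair (independent errors):  {accuracy(shared_sd=0.0, own_sd=1.0):.1%}")
```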

Promoting divergent opinions 

Clearly, intellectual diversity is one source of a crowd’s wisdom—and research suggests that artificially increasing a large group’s diversity could pay dividends. While this approach may undermine individual accuracy, it ultimately enhances the accuracy of the crowd, Joaquin Navajas (Universidad Torcuato Di Tella) and colleagues explained in an upcoming Psychological Science article. 

“Intentionally introducing a range of different opinions can improve the accuracy of group decisions,” Navajas told the Observer. “This can be done by deliberately providing varied starting points or ‘anchors’ for people’s estimates.” 

Navajas and colleagues tested this anchoring effect through a set of four studies. In three of the studies, they asked participants to answer general-knowledge questions tailored to their location, such as “How many bridges are there in Paris?” In all three studies, participants in the control group provided a blind estimate. Those in the anchoring groups answered a question that presented them with a low or high anchor. In the low-anchor condition, they were asked, “Is the number of bridges in Paris higher or lower than 10?” In the high-anchor condition, they were asked if the number of bridges exceeded or fell below 349. The participants then provided their own estimate. 

In two of the studies, the researchers provided these anchor points. The other two studies used anchors based on responses provided by the participants themselves. Regardless of the source of the anchors, however, the results were the same: When participants were presented with an anchor, their responses became less accurate than those in the control group, but when all participants’ responses were aggregated, the anchored crowds’ collective accuracy was higher than that of the control crowd. 

“When we averaged all their estimates, the results were more accurate than if we had simply asked them without any anchors,” Navajas said. “In other words, we found that deliberately giving the crowd two ‘wrong’ answers (one very low, the other one very high) made people more erroneous at the individual level but more accurate at the collective level.”  
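
A toy simulation can show how this pattern is arithmetically possible. Every quantity below is invented: the "true" answer, a crowd that shares a tendency to underestimate it, two anchors that roughly bracket the truth, and a fixed weight describing how hard an anchor pulls on an estimate. Under those assumptions, anchored individuals end up somewhat further from the truth on average, while the anchored crowd's overall mean lands considerably closer to it. This illustrates the logic, not the study itself.

```python
# Hypothetical simulation of the anchoring result; all parameters are invented.
import random

random.seed(4)

TRUTH = 180            # invented correct answer to some numerical question
N = 1000               # people per condition
ANCHOR_LOW, ANCHOR_HIGH = 10, 350
PULL = 0.5             # assumed weight of the anchor in an anchored estimate

def belief():
    """An unanchored guess: the crowd shares a bias toward underestimating."""
    return TRUTH * random.lognormvariate(-0.6, 0.5)

def anchored(anchor):
    """An anchored guess: a compromise between one's own belief and the anchor."""
    return (1 - PULL) * belief() + PULL * anchor

control = [belief() for _ in range(N)]
anchored_crowd = [anchored(ANCHOR_LOW) for _ in range(N // 2)] + \
                 [anchored(ANCHOR_HIGH) for _ in range(N // 2)]

def report(name, estimates):
    individual_err = sum(abs(e - TRUTH) for e in estimates) / len(estimates)
    crowd_err = abs(sum(estimates) / len(estimates) - TRUTH)
    print(f"{name}: mean individual error={individual_err:.0f}, crowd-average error={crowd_err:.0f}")

report("unanchored crowd", control)
report("anchored crowd  ", anchored_crowd)
```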

In their fourth study, conducted with 620 Americans from July to August of 2020, Navajas and colleagues also found that the anchoring effect increased participants’ collective accuracy when it came to forecasting the number of COVID-19 cases and deaths in the next week. This time, the anchors were set two orders of magnitude above or below the case and death counts reported 2 weeks before the study began. 

“Deliberately anchoring people to very low or high numbers for COVID-19 deaths and cases led to aggregated forecasts that were more accurate than those produced by a crowd that remained unanchored,” Navajas explained. 

Organizations could leverage this anchoring effect to make better-informed decisions by including a wide variety of viewpoints in their decision-making processes, even if they are individually wrong, Navajas said. This could allow for more effective crowdsourced problem solving across the fields of business, policymaking, and finance. 
