Cross-Examining Our “Fixed Pie” Approach to Truth
If evidence from true crime documentaries tells us one thing, it’s that the science supporting — and calling into question — criminal convictions is developing at an incredible pace. Advances in DNA profiling, as well as a deeper understanding of the psychological factors that can contribute to false confessions, have helped overturn hundreds of wrongful convictions in the United States alone. And research in Psychological Science suggests that there may be another confounding factor at work in courtrooms worldwide: the zero-sum fallacy.
In game theory, a zero-sum situation is one in which one player’s gain corresponds to an equivalent loss for another player. In the courtroom, says researcher Toby Pilditch, a professor of experimental psychology at University College London, this same fallacy can lead jurors to assume that the explanations of events offered by the prosecution and the defense in a trial are mutually exclusive (that is, only one of the offered hypotheses can be true) and exhaustive (one of the offered hypotheses must be true).
When this “fixed pie” approach to thinking is applied to a trial, Pilditch continued, it can lead individuals to treat evidential support as a finite, shared resource, causing them to view evidence that supports one hypothesis as inherently discounting the other, and evidence that supports both sides as irrelevant.
Determining the truth in a courtroom isn’t always that simple, though. In one trial, Pilditch explained, a man stood accused of firing a weapon on the basis of a single particle of firearm discharge residue (FDR) found in his coat pocket. His conviction was later overturned when the defense convinced a judge that the discharge residue was equally likely to have made its way into his pocket through police mishandling of evidence. Therein lies the logical error, Pilditch and colleagues wrote.
“It is possible that he fired the gun, there was also poor police handling of the evidence, and also that neither were true (e.g., the FDR particle came from elsewhere),” the authors wrote. “Therefore, rather than being neutral, the FDR evidence may have been probative.”
Assuming that hypotheses are mutually exclusive and exhaustive substantially reduces the computational complexity of a problem, says Pilditch.
“One could argue that when facing an environment wherein multiple items of evidence are available, ignoring evidence with multiple explanations in favor of evidence that speaks to only one explanation may be adaptive,” he continued. “However, this ignorance is also costly in many real-world cases.”
In the first stage of the study, Pilditch and colleagues sought to demonstrate the zero-sum fallacy by asking 49 participants to make determinations in four scenarios like this one: “Does a [positive/negative] Griess test result give any support to the claim that Ann [has/has not] handled explosives?”
Participants were informed that there was a 94% probability of the test coming back positive if Ann had handled explosives, but that she was equally likely to test positive if she had handled a deck of cards, which she claimed to have done.
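The reasoning at stake here is easier to see as a small Bayesian calculation. In the sketch below, only the 94% hit rate comes from the scenario; the 1% false-positive rate for the “neither” case and the uniform priors are illustrative assumptions, not figures from the study. With those assumptions, a positive test raises the probability of both offered explanations at the expense of the “neither” hypothesis, so the evidence is genuinely probative even though it cannot distinguish the two claims:

```python
# Illustrative Bayesian update for the Griess-test scenario.
# Assumed values: uniform 1/3 priors and a 1% false-positive rate
# when neither explosives nor cards were handled. The 94% hit rate
# is from the scenario described in the article.

def posterior(priors, likelihoods):
    """Apply Bayes' rule across a set of hypotheses for one piece of evidence."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

priors = {"explosives": 1 / 3, "cards": 1 / 3, "neither": 1 / 3}
# P(positive Griess test | hypothesis)
likelihoods = {"explosives": 0.94, "cards": 0.94, "neither": 0.01}

post = posterior(priors, likelihoods)
for h, p in post.items():
    # Both offered hypotheses rise above their 1/3 prior;
    # "neither" collapses toward zero.
    print(f"{h}: {p:.3f}")
```

Because the test is equally likely to fire under both offered explanations, it moves no probability between them; what it does shift is probability away from the unstated third possibility, which is exactly the support that the zero-sum framing leads people to discard.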
As predicted, those who reviewed positive test results made the correct determination (that a positive test result supported both claims) in only 40% of study trials; they reported that they were unable to make a determination in a similar proportion of trials. Those who saw negative test results, on the other hand, responded correctly over 80% of the time.
Those in the negative test condition may have been able to respond more accurately because they did not perceive incorrect hypotheses as competing with other explanations for a limited “sum” of correctness, Pilditch and colleagues wrote.
To pin down exactly why individuals in the positive condition responded incorrectly, the researchers conducted a follow-up study of 193 participants. This time, the researchers asked participants to provide confidence ratings for their responses. In one study scenario, respondents were presented with a case of alleged athlete doping; they were informed that the test had an equal chance of coming back positive whether the athlete in question had used illegal steroids or had unknowingly consumed a similar substance in a soy-based drink. Half of participants were also given a “nonexhaustiveness statement” informing them that it was possible for the athlete to have done neither of these things.
Reading the statement only slightly improved the rate of correct responses in the positive condition, and participants who responded with “cannot tell” reported being just as confident in their responses as those who responded correctly.
This suggests that uncertainty about how to interpret the test results is not the reason why respondents refrained from making a determination. Rather, they refrained because they interpreted results that did not favor one explanation over another as irrelevant, Pilditch and colleagues wrote.
In a final study of 201 participants, the researchers found that providing respondents with the probability of a false-positive in a given scenario had no meaningful impact on responses. Informing participants that it was not possible for both hypotheses to be true, on the other hand, resulted in a small but observable increase in correct responses in the positive condition.
Taken together, these studies demonstrate how the zero-sum fallacy can hinder people’s ability to recognize that one piece of evidence can simultaneously support competing hypotheses, a possibility that is ruled out only if one assumes the hypotheses are mutually exclusive and exhaustive, the authors wrote.
“In the contexts presented in these experiments, and in many real-world contexts such as law and medicine, these conditions do not hold, and yet people persist in disregarding evidence that is genuinely probative,” Pilditch and colleagues write.
While the zero-sum fallacy proved resistant to intervention in this study, further investigation may reveal other methods for mitigating this logical error in the courtroom and beyond.
Reference
Pilditch, T. D., Fenton, N., & Lagnado, D. (2018). The zero-sum fallacy in evidence evaluation. Psychological Science, 30, 250–260. https://doi.org/10.1177/0956797618818484