Making Sense of Moral Hypocrisy

Everyone wants to believe they have an unshakeable moral compass, but our perception of morality is often guided by thoughts and theories that reinforce existing biases.

Quick Take

- Imagining what might have been
- Intuitive theories fuel our explicit beliefs

National elections put a country’s moral compass to the test, tasking its people with electing a leader who most closely reflects their beliefs about civil liberties, economic policy, and countless other complex moral issues.

People are often more willing to make exceptions when judging the morality of their own actions than when judging those of other people. This moral hypocrisy also appears to extend to how we perceive the morality of ingroups and outgroups, including political parties. Claire E. Robertson, a morality researcher at New York University, and her colleagues Madison Akles and APS Fellow Jay J. Van Bavel explore this phenomenon in a 2024 Psychological Science article.

The researchers began examining group-level moral hypocrisy through a direct replication of a 2007 study by Piercarlo Valdesolo and APS Fellow David DeSteno (Northeastern University). Though the original in-person study involved 76 participants, Robertson and colleagues were able to recruit 610 participants by moving the study online using Prolific.

Modeling their procedure on the previous study, Robertson and colleagues tasked participants with estimating how many dots appeared on a screen before randomly assigning them to groups with the meaningless labels of “overestimator” or “underestimator.” Groups of four participants containing members of both of these minimal, arbitrarily assigned groups were then placed in an online chat room. Participants next discussed their thoughts on belonging to each group before taking a short survey about how strongly they valued membership in their new group.

At this point, participants were informed that either they themselves or another participant would assign them one of two tasks: an easy 8-minute spot-the-difference image test or 20 minutes of difficult logic problems.

Contrary to the original study’s results, after excluding data from “altruists,” defined as those who willingly took on the more difficult task instead of assigning it to someone else, Robertson and colleagues found that participants generally perceived outgroup members as assigning tasks just as fairly as they and their ingroup members did. The only participants who demonstrated ingroup favoritism were those who reported identifying strongly with their assigned group.

“This is consistent with previous work showing that the minimal group effect is typically driven by ingroup favoritism rather than outgroup derogation,” the researchers wrote. 

In a second experiment, which also followed the original study’s design, the researchers employed the same task-assignment method, but this time the 606 participants, all U.S. citizens, were divided according to political party membership, with a roughly 50/50 split of Democrats and Republicans. This time around, participants perceived the task-assignment choices made by political outgroup members to be less fair than choices made by members of their political ingroup. This skewed perception did not differ significantly with the strength of participants’ political affiliation.

“Ingroup bias is typically driven by outgroup derogation when groups fight over zero-sum resources, such as electoral power, and engage in moral conflict, such as arguing over partisan ideological beliefs,” Robertson and colleagues wrote. “This may be why partisan conflicts are often rife with moral hypocrisy.” 

When data from both experiments were combined, the researchers found that participants generally perceived ingroup members to have acted more fairly than outgroup members. These findings highlight the existence of moral double standards for ingroups and outgroups and demonstrate how our knowledge of a person’s political affiliation can influence our perception of unrelated behavior. 

“In our experiment, people were demonstrating outgroup animosity toward political outgroup members in a nonpolitical context, demonstrating how affective polarization and partisan sectarianism can bleed into nonpolitical domains and bias perceptions of political outgroup members’ general character,” the researchers explained (Robertson et al., 2024). 

Imagining what might have been 

One method people may use to excuse this kind of moral double standard is motivated counterfactual thinking. 

Counterfactual thinking occurs when people compare the actual outcome of a situation with an imaginary alternative outcome, known as a counterfactual, in which things went better or worse than they did in reality. This allows a person to justify their existing moral beliefs, explain social psychologist Daniel A. Effron (London Business School) and colleagues Kai Epstude (University of Groningen) and Neal J. Roese (Northwestern University) in an article for Current Directions in Psychological Science.

Does AI Possess Moral Agency?

Until recently, people needed to concern themselves only with the morality of actions taken by other human beings, but the rapid development of artificial intelligence (AI) has called this certainty into question. Now, researchers and laypeople alike must consider the morality of actions taken by and toward AI, wrote Ali Ladak, a social psychologist at the University of Edinburgh, and colleagues Steve Loughnan and Matti Wilks in a Current Directions in Psychological Science article.

The current body of research suggests that people use many of the same mental processes to judge the moral agency both of other humans and of AI, although we tend to reach different conclusions, Ladak, Loughnan, and Wilks wrote. 

Research suggests that people often perceive AI as having slightly more moral agency than a chimpanzee and about as much as a young child. People also perceive AI entities with human qualities such as a voice, face, or name as having slightly more moral agency than less anthropomorphic AIs. Overall, however, people would prefer that AI not be involved in weighty life-or-death situations such as medical decisions, even when AI has been shown to handle these choices more effectively than human beings.

“One explanation for why AIs are still blamed less than humans is that they are perceived to lack some mental capacities that are required for human-level moral agency,” Ladak and colleagues wrote. “This suggests that for AIs to be attributed human-level moral agency, they must have, in addition to the agentic mental capacities emphasized by existing theory, experiential mental capacities reflecting the capacity to sense and feel.” 

By contrast, research suggests that people attribute almost no moral consideration, also known as moral patiency, to AI, meaning that they feel little concern for AI’s well-being. In one study, participants rated AI as having the same capacity for fear and pain as a dead person; in another, participants reported less concern for the well-being of AI than for that of chickens, trees, or murderers.

“The simplest explanation is that AIs are typically designed this way: They calculate, decide, and act without any feeling or emotion,” Ladak and colleagues explained. “However, as AIs become increasingly human-like and expressive, like the latest chatbots, people may increasingly perceive them as experiential and, in turn, attribute them moral patiency” (Ladak et al., 2024). 

Going forward, it will be important for researchers to study the relationship between how people perceive the moral agency and the moral patiency of AI, Ladak and colleagues concluded. While perceptions of humans’ moral agency and patiency often overlap, this may not be the case for AI.

“The moral judgments we make about what to condemn and condone depend not only on the facts we know but also on the counterfactuals we imagine—what we believe ‘might have been,’” Effron and colleagues wrote. “People’s capacity to condemn and condone whom they wish may be limited only by their imaginations.” 

After the 2020 U.S. presidential election, for example, researchers found that many Republicans believed that the COVID-19 pandemic would be “a whole lot better” if Donald Trump had been reelected, while Democrats reported believing it would be a “whole lot worse” without Joe Biden in charge. In both cases, Effron and colleagues explained, these partisan individuals were comparing the reality of the pandemic during Biden’s presidency with imagined counterfactual content about what might have happened if Trump were still president. The direction of this comparison differed depending on their existing political affiliation, with Republicans comparing reality upward against an idealized alternative and Democrats comparing it downward against a worse imagined outcome.

“Because we cannot prove what might have been, we have the flexibility to imagine content that fits with our existing beliefs,” Effron and colleagues wrote. “This process facilitates moral inconsistency: We generate counterfactual content that allows us to justify the moral judgments we prefer.” 

Counterfactual thinking isn’t limited to the political domain, however—we also use it to justify our own morally questionable behavior and to imagine how alternative circumstances might have influenced the behavior of other people. This isn’t always a bad thing, Effron and colleagues note, because counterfactual thinking can help us understand causality by considering how a situation might have gone differently. Problems begin to arise, however, when we prioritize our own counterfactuals over the facts, which take considerably more effort to gather. 

“Counterfactuals—possibly even more than facts—are appealing fodder for motivated reasoning,” the researchers wrote. “Constructing counterfactuals is easier than collecting facts. Facts require observation; counterfactuals just require imagination” (Effron et al., 2024).

Intuitive theories fuel our explicit beliefs 

Despite how outspoken people can be about their political beliefs, their moral thinking is often guided by intuitive theories about the moral value of actions and the character of the people who perform them, researchers led by APS Fellow M. J. Crockett, who studies the psychology of human values at Princeton University, wrote in Current Directions in Psychological Science.

“Intuitive moral theories may guide people’s judgments implicitly, in ways they cannot articulate, in contrast to explicitly argued philosophical theories,” Crockett and colleagues wrote. 

This process prioritizes efficiency over precision, leading to theories described as “resource-rational” because they require individuals to invest fewer cognitive resources in moral decision-making, even if they sometimes lead people to believe things that aren’t true, the researchers continued. The quick thinking supported by intuitive moral theories also makes it easier to cooperate with people who hold similar beliefs.

“Negotiating terms for specific moral agreements is computationally intensive; intuitive theories of value offer different ways of efficiently approximating the agreements people would make with more cognitive resources,” Crockett and colleagues wrote. 

Rule-based morality, for example, provides a cognitive shortcut for approaching common situations like waiting in line, in which equality is safeguarded by discouraging queue-cutting. Outcome-based theories, on the other hand, can help guide people’s approach to ongoing social relationships such as marriages in which individuals may agree to weigh their well-being equally over time rather than within each discrete interaction. 

People’s moral preferences may also depend on who they are judging, as well as the context in which they are judging a behavior. Many people may prefer romantic partners who will prioritize their needs through a loyal, rule-based approach, for example, but prefer outcome-based politicians who can balance the needs of many constituents at the same time, the researchers suggested (Crockett et al., 2024).

Framing intuitive theories as moral units competing for cultural selection could help further the study of how people learn both specific beliefs and the mental processes used to generate new beliefs. The researchers caution that even though an intuitive moral theory is considered successful if it helps people find new social partners, streamline negotiations, and appear trustworthy to other people, that doesn’t guarantee the theory will be used for good. 



References 

Crockett, M. J., Kim, J. S., & Shin, Y. S. (2024). Intuitive theories and the cultural evolution of morality. Current Directions in Psychological Science, 33(4), 211–219. https://doi.org/10.1177/09637214241245412  

Effron, D. A., Epstude, K., & Roese, N. J. (2024). Motivated counterfactual thinking and moral inconsistency: How we use our imaginations to selectively condemn and condone. Current Directions in Psychological Science, 33(3), 146–152. https://doi.org/10.1177/09637214241242458  

Ladak, A., Loughnan, S., & Wilks, M. (2024). The moral psychology of artificial intelligence. Current Directions in Psychological Science, 33(1), 27–34. https://doi.org/10.1177/09637214231205866

Robertson, C. E., Akles, M., & Van Bavel, J. J. (2024). Preregistered replication and extension of “Moral hypocrisy: Social groups and the flexibility of virtue”. Psychological Science, 35(7), 798–813. https://doi.org/10.1177/09567976241246552 

