Moral Gridlock? Moralizing Issues Can Persuade—and Stifle Compromise
It’s easy for people to become entrenched in their views, but the language used to frame an issue can help them see it differently or, conversely, leave them even more entrenched.
Consider policies related to automation technologies like crime-surveillance software or hiring algorithms. Framing these policies through the lens of morality can push people to consider how the technologies may support or undermine our privacy, safety, and freedom. Centering economic concerns, on the other hand, may encourage people to prioritize the technologies’ price tags over questions of right and wrong.
Both framings appear equally effective at persuading people to change their minds, wrote Rabia Kodapanakkal (Tilburg University) and colleagues in Psychological Science. But moral framing can also make people more resistant to compromise.
“If one perceives the other side as holding an objectively wrong position, it does not make sense to compromise. For people who hold truly strong moral convictions, it would be akin to compromising on the answer to 2 + 2,” Kodapanakkal and colleagues explained. “Persuading and entrenching people may be a viable goal if one considers the changed attitude to be the morally correct one, but if moral framing and reframing are to be used to bridge political divides, such side effects are antithetical to the approach.”
Kodapanakkal and colleagues evaluated the effects of moral and nonmoral framing across three studies: one on hiring algorithms (1,590 participants) and two on crime-surveillance technology (2,151 and 1,015 participants). In each case, participants reported their level of support for the technology, the extent to which that support was grounded in moral conviction, and which specific moral foundations they felt related most strongly to their stance on the issue.
In all three studies, participants then read one of several descriptions of the technology in question.
In the hiring-algorithm study, participants read one of three texts: fairness-based objections to the algorithms, an account of how costly they can be, or a neutral control passage describing how the technology works. The moralized reading focused on how algorithms can discriminate against job applicants on the basis of their age, gender, or location.
In the studies on surveillance technology, participants read one of three texts: harm- and liberty-based objections to the technology, an account of how expensive it can be, or a neutral control passage. The moralized readings largely focused on the potential for crime-surveillance technology to invade people’s privacy as they go about their daily lives.
Participants then reported their stance on the technology once again, along with the extent to which that stance was grounded in moral conviction. In the third study, on surveillance technology, participants also completed a compromise-based economic game and reported how their stance on the technology would influence their willingness to support a political candidate and a workplace manager who agreed with them.
Across all three studies, moral and nonmoral arguments proved equally effective at changing participants’ views on both issues.
In the third study, participants in the moral-framing condition also reported being more likely to support a candidate or manager who was unwilling to compromise with the other side. Participants who read nonmoralized arguments were no more or less willing to work with a compromising candidate or manager than those in the control condition.
Similarly, participants who read the moralized arguments were less willing to compromise in the economic game, whereas nonmoral-framing and control participants remained equally open to compromising with other players.
Further analysis showed that participants in the moral-framing condition reported more anger and disgust toward the technology, whereas those in the nonmoral-framing condition focused more on the financial cost of implementing it.
This suggests that although swaying people’s opinions with nonmoral arguments may not make them any more open to compromise, swaying them with moral arguments could make them less so.
Fostering this kind of uncompromising conviction may be desirable for political groups fighting for policies on moral grounds, but it can also contribute to political gridlock, preventing legislators from taking any action at all, the researchers wrote.
“The use of moral frames as a persuasion tool should be considered cautiously and assessed for potential side effects; otherwise the goal of bridging moral divides with these tools may backfire,” Kodapanakkal and colleagues concluded.
Reference
Kodapanakkal, R. I., Brandt, M. J., Kogler, C., & van Beest, I. (2022). Moral frames are persuasive and moralize attitudes; nonmoral frames are persuasive and de-moralize attitudes. Psychological Science, 33(3), 433–449. https://doi.org/10.1177/09567976211040803