Predicting Sexual Crime: Are the Experts Biased?
Leroy Hendricks had a long history of sexually molesting children, including his own stepdaughter and stepson. When he was 21, he was convicted of exposing himself to two girls, and he continued to prey on kids until he was sent to prison at age 50 for molesting two 13-year-old boys. He served ten years of his 5- to 20-year term, with time off for good behavior, and then was set free.
Except that the state of Kansas did not want him to be free. Under its Sexually Violent Predator Act, and based on expert mental health evaluations, the state decided that Hendricks remained a menace to public safety. Authorities moved to confine him to a mental hospital, where he would remain indefinitely. Hendricks appealed his detention, and his case went all the way to the U.S. Supreme Court, which ruled against him in a narrow 5-4 decision.
The case of Kansas v. Hendricks exemplifies the difficulty of balancing individual liberty against the public's right to protection from sexual predators. But it is only the most famous of the many such cases that come before the nation's courts. Twenty states now have Sexually Violent Predator (or SVP) laws, which allow civil commitment of offenders who, even after serving their sentences, are considered likely to continue as sexual predators.
These proceedings routinely rely on the judgments of forensic psychologists and psychiatrists, who use standardized tools to assess future risk. But how reliable are these tools, and how reliable are the forensic mental health experts who apply them? The fact is, this important question has never been rigorously investigated until now. University of Virginia psychological scientist Daniel Murrie and his colleagues suspected that forensic experts are far from objective, and that they are influenced by the same powerful cognitive biases that shape all human decisions. Murrie set out to conduct the first empirical test of this possibility.
He wanted to test for what he calls "adversarial allegiance": whether supposedly objective experts hired by the prosecution tend to make judgments favoring the prosecution, and whether those hired by the defense do the same for the defense. It is effectively impossible to study forensic experts' biases in actual court cases because the proceedings are adversarial: prosecutors and defense attorneys deliberately seek out psychologists and psychiatrists who they have reason to believe will support their side of the case. To get around this real-life "selection effect," Murrie created an elaborate deception to test forensic experts' objectivity in a simulated legal proceeding.
He recruited more than 100 experienced, practicing forensic experts, attracting them with an offer of free training (and continuing-education credit) on the two instruments most commonly used for sex-offender risk assessment. Most of those who responded were psychologists (PhDs or PsyDs), and most had some experience conducting risk assessments of sexual offenders, often with these popular instruments.
Murrie did in fact train the experts for two days, which is comparable to the training forensic experts in the field typically receive. The quid pro quo was that they would return a few weeks later to score actual offender files. The experts were led to believe they were taking part in a formal, large-scale forensic consultation, but in reality they were all assessing the same four cases. Some believed they were working for the public defender's office, while others believed they were working for a special prosecution unit focused on SVP evaluations. To reinforce the deception, all of them met beforehand with the prosecutor or defense attorney (in every case actually an actor playing an attorney), who made slightly biased but realistic statements. For example, a defense attorney might say: "We try to help the court understand that the data show not every sex offender really poses a high risk of reoffending." The experts were paid $400 for their participation, and the attorneys also hinted at possible future opportunities for paid consultation.
The experts made their assessments based on authentic files, including court, criminal, and correctional records: arrest documents; victim and witness statements; plea, judgment, and sentencing documents; prison disciplinary records; and so forth. The files also included actual previous psychological evaluations, plus a fabricated transcript of an interview using one of the training instruments, constructed from the offenders' records. Three of the four cases involved child victims; the remaining one involved adult victims.
Based on these genuine case records, all the experts completed the two risk assessments for each of the four SVP cases. As reported in an article to be published in the journal Psychological Science, the study revealed a clear pattern of adversarial allegiance, that is, decision making biased in favor of the "side" the experts believed they were working for. Experts who believed they were working for the prosecution rated the offenders as a greater threat to public safety, while those who believed they were working for the defense rated the identical offenders as much less dangerous. The allegiance effect was dramatic, and it was far more pronounced on the more subjective of the two assessment tools.
This is disturbing, but it is consistent with broader recent concerns about the validity of forensic evidence. Other research has called into question the objectivity of techniques such as DNA analysis and fingerprinting, and in 2009 the National Research Council warned that many popular forensic science techniques may not be as accurate or reliable as commonly believed. The NRC urged further study of the biases that may skew forensic findings. Murrie's results suggest that the NRC's concerns might be broadened to include forensic psychology.
Wray Herbert’s blogs—“We’re Only Human” and “Full Frontal Psychology”—appear regularly in The Huffington Post.