Despite Occasional Scandals, Science Can Police Itself
In the wake of the fraud investigation into Diederik Stapel, psychological science has recently been put under a magnifying glass, and questions (both fair and unfair) have been raised about the integrity of the field. APS Executive Director Alan Kraut addressed some of these questions in a commentary for the December 9, 2011, issue of The Chronicle of Higher Education. We have reprinted his column below.
The public has always been fascinated with the scientific mind, including its corruption. So it is no surprise that the sordid case of the Dutch researcher Diederik Stapel grabbed headlines for a few days, including prominent stories in The New York Times, Los Angeles Times, Chicago Tribune and this publication [Chronicle of Higher Education]. The news stories came after the journal Science expressed concern about one of Stapel’s published papers, which is under investigation for data tampering.
It is already clear that this one suspicious paper is just the tip of the iceberg. In fact, Stapel had been under fraud investigation for some weeks when the news stories broke. The investigation, by Tilburg University in the Netherlands, where Stapel was until recently a professor, could lead to the retraction of dozens of papers by the social psychologist, published over a period of 10 or more years. Stapel outright lied to his colleagues, including many students, claiming he had data sets that they could legitimately use in experiments they worked on together; in fact, such data never existed.
Stapel has owned up to his fraudulent acts, and voluntarily relinquished his PhD. Before this is over, it is likely that dozens of papers by his guiltless students and colleagues will be withdrawn as well, and their PhDs called into question. My organization, the Association for Psychological Science (APS), represents the interests of scientific psychologists, and so is centrally involved in this issue. But the association is also directly affected: A few of Stapel’s articles were published in our flagship journal, Psychological Science.
Such egregious cases are rare, and they are harmful to the scientific enterprise. But it’s important that they be recognized as the aberrations they are. Science is no more immune to lying and cheating than banking, medicine, or the law. It is also worth noting that Stapel was caught. True, he did get away with his intellectual crimes for far too long, embarrassingly so, but in the end it was the suspicions of his colleagues and students that exposed him. Scientific inquiry is guided by a set of laboratory conventions and publishing rules that promote integrity and minimize the publication of false conclusions. This is equally true of all the sciences, just as it is true that all the sciences have been vexed by scoundrels.
Is this system perfect? Not by a long shot, but what’s important is that the system is constantly under scrutiny by scientists themselves, who use the tools of science to expose and correct its flaws. Most of these flaws and concerns are undramatic — not the stuff of headlines. For example, we just published one paper, and will soon publish another, both of which take the field to task for some common but questionable research practices. The first, by scientists at the University of Pennsylvania and the University of California at Berkeley, demonstrates how some widely accepted methods for reporting and analyzing data can lead to an unacceptable rate of false positives, which are results that appear to be valid but in fact are not. This paper explains how simple things — like not reporting all dependent variables or conditions, or changing the original number of research subjects during the course of an experiment, or ignoring results that seem oddly random or unrelated to any hypothesis — can artificially boost false positives.
The second paper actually demonstrates that these practices are used in the nation’s most elite labs more commonly than has been previously acknowledged. The study, by scientists at Harvard, Carnegie Mellon, and MIT, uses a rigorous “truth serum” methodology to elicit the first honest look at how scientists typically conduct experiments — and it finds the system flawed. Indeed, fully a third of those scientists surveyed admitted to fudging their data using some of these practices.
These are unwelcome conclusions — we as scientists would like to be more rigorous — but the crucial point is that they are evidence of science policing itself. And these scientists are offering up some simple, concrete, low-cost solutions to the broad problem. They would, for example, require scientists to stick with their original plans for data collection and to list all variables and conditions, even when they fail to yield significant results. Others have proposed public online data repositories, which would make all data transparent, including failed replications. In fact, we are now considering these solutions for our journals.
The scientists who conducted these two time-consuming studies of scientific methodology took time away from their own research projects because they felt it was important to put laboratory science itself under the lens — with hopes of improving its integrity and value. That value, ultimately, is psychological science’s payoff for the public. Psychological scientists do not work in isolation from the broader culture, squirreled away in a lab, asking arcane questions. They seek to better understand human motivation, emotions, self-control, interactions of genes and environment, judgment and decision making — so that we might lead happier, healthier, more productive lives.
Illuminating these building blocks of human behavior affects everything from public health and disease prevention to financial choices, energy conservation, and even political and moral judgments. As a result of behavioral-science research, we now know better ways to teach our kids mathematics and reading; clearer ways for physicians to explain health risks to the typical patient; and simple ways to motivate young adults to save for the future.
And if we want a fuller understanding of how the brain works in order to better address Alzheimer’s, schizophrenia, post-traumatic stress disorder, and other serious mental afflictions, we’re going to need an equally full understanding of the basics behind thinking, learning, remembering, and other brain-related aspects of behavior.
The above-mentioned studies of laboratory ethics, and proposals for change, are already being widely discussed in the field. They will most likely lead to self-examination, and then to improvements in the conduct of research — and ultimately to more truthful and helpful answers to the riddles of behavior. Notably, they will not catch the Diederik Stapels of the world red-handed. Those rare cases of blatant immorality must be rooted out and publicly exposed — as this case was, by vigilance within the field.