Rotten Reviews
Back in the early 1980s, the actress Dame Diana Rigg began asking colleagues in the theater and film industries — including some of the world’s most honored thespians — to share their worst-ever reviews. The responses turned into a collection, No Turn Unstoned, which eventually drew a cult following as she toured university campuses reading excerpts from the book.
In that spirit, we asked some distinguished APS members, all of whom are leaders in their areas of study, to share their own worst wounds from the critics (in this case, journal editors, peers, job recruiters, or even laypeople hearing about their studies). These researchers offered up some of the weirdest, harshest, or — in hindsight — amazingly off-base reviews they suffered. Some of the respondents simply shared direct quotes from the reviewers, while others provided full background stories. In many cases, the rejected papers were later published in other journals and became seminal works.
Here are some of the most memorably brutal critiques and reactions leveled against some of psychological science’s leading lights.
Toni C. Antonucci
University of Michigan
What to say after one reviewer didn’t think I understood the convoy model and told me that I should read some of Antonucci’s work to get a better feel for it?!
Timothy B. Baker
University of Wisconsin–Madison
When I was a fairly young investigator, I was invited to give a talk to an organization that was associated with the production and sales of alcoholic beverages. The organization paid for my travel to and my stay at a swank resort.
During the first morning meeting of the conference, I heard some very interesting talks given to the attendees. One talk was about how the association between alcohol use and crime was spurious because personality factors caused each. Another talk was a very scholarly examination of how alcohol expectancies could powerfully shape behaviors that followed alcohol consumption. As my talk was approaching, a copresenter leaned over to me and said, “You have it made. Once you are invited as a keynote speaker, you are then invited every year afterwards. Next year is in Australia!”
I then got up and began my talk, arguing that to some extent we researchers had focused too heavily on dysphoria as a setting event. I presented data showing that positive affect was directly related to the use of cocaine, heroin, nicotine, and alcohol. I also spoke about how these data suggested that similar motivational mechanisms were involved in the dependence syndromes associated with these agents. At this point a rather elderly, distinguished-looking gentleman raised his hand to ask a question. (I later learned that he was a senior officer in the organization and very influential.) I called on him, and he asked two questions: (1) So, you are discussing positive affects, not the positive effects of alcohol? And (2) you are suggesting that in some way alcohol has something in common with cocaine and heroin?
I gave the rest of my talk to an audience that seemed strangely wooden and distant. They seemed to be averting their eyes as if they did not want to witness the inevitable unfolding of a humiliating event. At the end, which was eerily silent, I returned to my seat, where my fellow presenter leaned over and whispered to me, “I’ll send you a postcard from Australia.” No one spoke to me over the next 2 days of the meeting. Needless to say, there was no trip to Australia.
David H. Barlow
Boston University
“The study as presented fails at so many levels, I am disinclined to list all the specific individual problems.”
Linda M. Bartoshuk
University of Florida
I’ve never had a paper rejected for absurd reasons, but I’ve been forced to make changes that were absurd. I remember a 1991 study on children, done with a colleague, Jean Ann Anliker, that had to be completely rewritten after a reviewer refused to let us use the term “supertaster” because it had not previously appeared in the literature. We rewrote the paper and introduced the term in later papers.
BJ Casey
Weill Cornell Medical College
Our seminal developmental imaging paper (Galvan et al., 2006, The Journal of Neuroscience), which provided the empirical evidence for our theoretical imbalance model of adolescent brain development, was rejected by Nature in 2005. The letter stated, “We do not believe your manuscript represents a development of sufficient scientific impact to warrant publication in Nature.”
But then this work was featured in a brief article on the teen brain in Nature the year after its publication in The Journal of Neuroscience. The paper and model also were featured heavily in the National Institute on Drug Abuse’s strategic plan in 2010 and highlighted on its website.
The empirical paper has been cited more than 500 times in the past 5 years alone, and the theoretical model that emerged from it has been cited more than 1,100 times.
The best part is that if you look closely at the picture of the teenager being scanned in the Nature article, it’s my son, Jonah. So he got published in Nature before his mom did!
Stephen J. Ceci
Cornell University
Of all the harsh comments, I think the one I best remember is from a review of a study I did with another assistant professor back in 1980: “I advise these young authors to consign this manuscript to their developmental juvenilia and not try to publish it.” American Psychologist rejected it, of course. It went on to be published in Behavioral and Brain Sciences, where it has been cited nearly 800 times.
Susan T. Fiske
Princeton University
There was the job interviewer who said, “If we were hiring the person we liked better, it would have been you.”
Morton Ann Gernsbacher
University of Wisconsin–Madison
A reviewer said that publishing my critique of mirror neurons would be “dangerous.” Dangerous? Seriously? I’m 5’1”, I always have manicured nails, and I hate the sight of blood. Dangerous?
The irony is that other people have since published mirror neuron critiques, including an entire book dedicated to The Myth of Mirror Neurons. So dangerous!
Sam Glucksberg
Princeton University
My favorite bad review asserted that my research-grant proposal was “just another knee-jerk reaction-time study.” The other reviewers were highly positive, so I got the grant (NSF) anyway. This reviewer was right about the dependent variable (one of many), but what did she/he mean by “knee-jerk”?
Gary P. Latham
University of Toronto, Canada
“If I did not know Latham, I would not have read past the abstract. Unfortunately, I did; the paper was even worse.”
Elizabeth F. Loftus
University of California, Irvine
In 1983, a student (Wesley Marburger) and I published an article in Memory & Cognition with the unusual title, “Since the Eruption of Mt. St. Helens, Has Anyone Beaten You Up?: Improving the Accuracy of Retrospective Reports With Landmark Events.” We found that providing a landmark event (in this case, the 1980 volcano eruption in Washington State) for subjects trying to remember their past victimizations reduced the problem of misdating past events and resulted in more accurate reporting. As of February 2015, the paper had been cited 324 times on Google Scholar. Citers might be surprised to learn that without some perseverance on the part of the senior author, no one might know about this paper: It was rejected by five journals. After the fifth, I read all the reviews and saw that those from the Memory & Cognition reviewers were the least negative. I went back to that journal and made a case for publication.
Henry L. “Roddy” Roediger, III
Washington University in St. Louis
In 1975, Bob Crowder [now deceased; a Yale professor at the time] and I wrote a paper entitled “A Serial Position Effect in Recall of United States Presidents.” Like most authors, we were hopeful others would find it of interest and that reviewers and an editor would accept it. We submitted it to the Journal of Verbal Learning and Verbal Behavior (now the Journal of Memory and Language), edited by Ed Martin. In those days, the idea of triaging papers had not yet taken hold in psychology. Ed blazed a trail in triage by sending our manuscript back to us by return mail. His action letter was two sentences long:
Dear Dr. Roediger:
Your manuscript with Robert G. Crowder, “A Serial Position Effect in Recall of United States Presidents,” is, of course, rejected. Check any source and you will see the presidents best recalled are most often cited in print, so all you have shown is that frequent items are better recalled.
Sincerely,
Ed Martin, Editor
The paper eventually appeared in the now-defunct Bulletin of the Psychonomic Society, which had a rejection rate of 0% (for members of the society). We discussed Martin’s criticism there. Interestingly, the original data from that rejected paper appeared in Science in November 2014, as part of a paper on “Forgetting the Presidents” by K. Andrew DeSoto and me.
Comments
We submitted a paper to Nature, which was rejected on the grounds that the research had shown peripheral suppression in cats and it was safe to assume the effect would be observed in primates.
We went back to the lab and showed the effect in marmoset monkeys. That paper was rejected too, this time on the grounds that the effect was not “new,” as it had already been shown in cats (in the very paper we had sent to another journal after the Nature rejection).
The rejection that still smarts most for me said, “The research reported in this paper is technically adept, but boring as hell. Therefore we decline to publish it.”
Two truly great rejections come immediately to mind. The first was the reaction of the editor of a high-level journal to a response I made to two reviews. One review said the current version of our paper should be rejected but proposed revisions after which the paper could make a contribution. The other review literally MADE UP “quotes” from the paper, defeated them handily, and recommended rejection. Never having seen anything quite like this before (or since), I tried to be polite and indirect in my reply, which documented line by line that nothing even remotely resembling the language attributed to us had been written in the manuscript. I said that we could deal with the requests for revision made by the first reviewer, said that the second reviewer “had reached the right conclusion but for the wrong reasons” (that is, for MADE-UP reasons, not even real ones), and asked if it was OK to resubmit. The editor said no, allowing as how I had myself admitted that both reviewers had reached the right conclusion — to reject the submission — and the editor was sticking to it. This rather mind-boggling reply was hard to explain to my graduate-student first author. I mean really, what do you say to something like that? I did vow, however, never again to be too indirect or too undeservedly polite in responding to reviewers (or editors) who got things wrong! After revisions in accord with the first reviewer’s criticisms, the paper was published in another high-level journal.
The second is my all-time favorite review, of my former student Sian Beilock’s first attempt to publish her now well-known and influential work on expertise and choking under pressure. This review said that our paper put forth “what could only be called an otherworldly view of attention,” that our findings and our conclusions were all wrong, and that “the best response to these data would have been to seek other methods.” This review is quoted in full in Beilock’s 2001 publication of the results in the Journal of Experimental Psychology: General.
So my first example shows off an editor who thought that a review that made things up and then criticized them was as valuable as a review that helped the authors get better, and my second shows a reviewer who was so certain-sure of his or her own views as to be willing to deny a methodology, not because the methodology was flawed but because it produced results the reviewer didn’t like! The two students who bore the brunt of these review processes survived and have gone on to flourish in excellent careers. But I must admit that I still hold a bit of a grudge! These might now be funny stories to tell, and they certainly carry lessons to be learned, but things were not so funny at the time for the students who got hammered!
John Garcia’s experiments demonstrating long-delay conditioned taste aversion in rodents were rejected by nearly every journal. One reviewer famously said these data were “about as likely as finding bird shit in a cuckoo clock.” No one believed that classical conditioning could occur with an interstimulus interval greater than about 0.5 seconds, à la Pavlovian conditioning, and certainly not in one-trial learning.
A two-word review: “Intellectually vacuous”!
I submitted an article that was critical of a U.S. government program that claimed success based on psychometric data but refused to publish the details of the instrument used to obtain those data, as the instrument was “classified.” The reviewer wrote that I should provide details of the instrument (presumably by breaking into DoD offices at night).