Presidential Column
Our Urban Legends: Journal Reviews
In my last column, I discussed urban legends about journal publishing, noting that these have subtle and not-so-subtle influences on how research is done and presented that can inadvertently undermine the development of an increasingly cumulative and robust psychological science. I picked particularly on the legend that, to be publishable in a high-prestige journal, a paper must meet the Newsworthy Definitive Solution, or NDS, criterion: namely, that the paper should include a handful of studies that “definitively test rigorous new theory-derived predictions that solve a newsworthy major problem.” In this follow-up column, I focus on legends about journal policies and practices that may influence reviewer and editor behavior, and hence further affect how research is done and presented and what our science becomes.
From Importance to Newsworthiness
Once upon a time, long ago but not far away, journal editors used their own judgment to quickly evaluate the “importance and potential significance” of submissions, without farming them out to two to four (or more) additional reviewers. (Restricting the judges to N = 1 is, after all, one way to enhance the reliability of the judgment.) In the decades since then, the criterion of newsworthiness seems to be replacing importance/significance in many publication decisions. Newsworthiness may allow easier consensus, with less time spent debating uncomfortable and perhaps insoluble value judgments about importance.
The newsworthy criterion makes sense as long as the contribution is really new, not just recycled, repackaged, and more fashionably labeled; the findings are solid and interesting; and the reviewers remember that not everything that’s news is fit to print, at least not in the front pages of our science. Even better if newsworthiness is disconnected from the “definitive solution,” thereby avoiding the problems with the NDS criterion discussed in my previous column. One hopes that the newsworthy contribution is at least as welcome when it opens unexpected routes to new questions as when it closes doors to old ones. I cheer it loudest when the effort to be newsworthy leads to short introductions, lean discussions, and data-driven reports. In return, the author deserves short, rapid, straightforward reviews without invitations for endless, trivially different revisions.
Replication Expectations
In 1989, two chemists from well-known universities reported that they could produce nuclear fusion in a jar of water (Science, March 28, 1989). It seemed unbelievable, and it was. Invalidation followed quickly when other laboratories were unable to replicate their claims, illustrating how wonderfully self-correcting science can be.
Replicability, one learns in high school, is a basic requirement for building a cumulative science, and researchers are supposed to be in big trouble (and not just with their self-concepts) when their work cannot be replicated. In many areas of psychological science, however, replication efforts are complicated or even impossible because of subtle (or gross) variations between studies in methods, samples, and so on. What are the implications of this difficulty for the reviewer/editor? And for the responsible author? While there may be no definitive answer, let’s at least worry about it and give a high priority to replication, insisting whenever possible on procedures and shared tools that allow and facilitate replication by independent others.
We also need to make it “newsworthy” when there are well-done failures to replicate important claims, and allow them into our journal pages, sometimes even in the front pages and not just in a footnote. Although that may sound self-evident, failures to replicate in many areas of our science are still shrugged off rather than seen as deeply disturbing. Such nonchalance may persist as long as such failures are considered neither newsworthy nor deserving of space and attention in relevant journals. And it makes it tempting for researchers to publish hot, newsworthy findings prematurely, without making the essential effort to ensure that the newsworthy phenomenon is robust enough to be found more than once.
What Not To Do in Journal Reviews
I have little advice to give journal reviewers and editors about what they should do, but I have a long wish list for what I hope they won’t do: miss the point of the work, pick on trivia, drag in their own work, make ad hominem or ad feminam remarks, and forget to keep a professional tone even when dealing with a perceived bitter enemy who dislikes them and whom they dislike even more. And don’t mislead researchers into thinking that with a few revisions and a few more months of labor to satisfy your requests they will have a chance when in fact they don’t. You may not even be the one who reviews the paper in the next round, and the new reviewers are apt to find their own “additional concerns.” Publishing new findings in a science has time urgency, and unnecessarily long delays in the review process are unacceptable. And, yes, all the reviews combined for a submission should not exceed the page limit for the article.
The Worst Sin: Micromanaging Others’ Research
It’s especially poisonous when reviewers/editors think it’s their job to micromanage articles, trying to turn them into one of their own shiny products, requesting multiple rounds of revisions (it used to be one on average; now I hear about four to five rounds) only to reject the paper at the end, sometimes wasting years of everybody’s time and goodwill. A young colleague who recently suffered this plight writes: “I think this is more of an epidemic in top tier journals, in which due to their high status, their editors develop a sense of entitlement to shape the manuscripts in their vision rather than in the author’s. And their aversion to taking any kind of risk is an impediment to the field.”
On the same point, a distinguished researcher describing an article co-authored with a student tells me: “We just spent 1.5 years after tentative acceptance going back and forth with one of our action editors, who literally was writing extended passages of text and instructing us to insert them into the discussion section. We didn’t even agree with some of the passages, but we found ourselves with such ‘sunk costs’ that we included the editor’s text into the manuscript.”
Bias
You don’t have to be an expert on the Implicit Association Test to know that bias, both obvious and subtle and often outside awareness, creates big problems for reviewers, editors, and even more for those who depend on them. Fortunately, our biases and conflicts within science usually don’t lead us to shoot innocent people in dark alleys, but they do create dilemmas. Tomes have been written within psychological science about the pernicious effects of bias, sometimes by the same people who are both its practitioners and victims, as reviewers and as applicants for journal space or research funding.
The dilemma heats up when the controllers of the resources are in the remarkable position of anonymously (as if they were in a witness protection program) deciding the fates of applicants whom they know well, and might even loathe, sometimes while in the midst of fighting them in ideological turf wars that can trigger memories of Bosnia’s darkest days. I remember a review I saw years ago from one of our best journals. Neither the editor nor the reviewer seemed to be suffering from cognitive dissonance in their one-paragraph review. In it, the reviewer (anonymous, of course), supported enthusiastically by the editor, said the submitted article (by respectable scientists) was “pseudo science,” and that the reviewer had “not even bothered to read it.”
We depend on the wisdom and care of the editor who has to choose who shall be the judges of the controversial work. Should the editor select experts who represent the enemy camp or experts who are advocates on the contestant’s side? Do neutral experts exist in the demilitarized zone? Recognizing this dilemma, some of our journals invite suggestions from authors about reviewers they would like to include or exclude. On the authors’ side, particularly at more paranoid moments, this practice can trigger fears that the editor may do the opposite of what’s requested. Paranoia notwithstanding, one hopes that, knowing the battle lines when the battleground is really “hot,” the editor will try to get reviewers who are well informed but at least at short arm’s distance from the fighting.
On the reviewer’s side, there seem to be vast individual differences in attitudes about when to decline the reviewer role because of perceived conflicts of interest. Some respected and respectable scientists seem to have no qualms about accepting review assignments that others would automatically decline, such as reviewing work by recent former students, extremely close friends, or close colleagues. Ditto for reviewing the work of one’s worst ideological enemies, in which case the content of the review can be predicted better from knowing the reviewer than from the materials being reviewed. If these differences really are so large, it might be worth some public discussion of guidelines for the grey areas.
Tone of the Review
Regardless of the decision the editor/reviewer reaches, the tone of the review and of communications to the author matters, particularly when dealing with novices, as everyone who has ever submitted a paper knows. My first experience with a psychology journal editor was in 1958, with what was then JASP, the Journal of Abnormal and Social Psychology (now JPSP, the Journal of Personality and Social Psychology). Daniel Katz’s warm, encouraging personal review of the paper, when I think back to it now with today’s concepts, must have enhanced my self-efficacy, mastery orientation, and incremental theory about myself as a possible scientist when I was still a raw beginner. Especially when writing (or receiving) rejection letters, the tone indeed matters. And, as our journal articles say near the end, “In conclusion”: My grandmother, when discussing her version of “peer review,” always told me that kindness to strangers, as well as to colleagues, usually is a good idea. However, she also added, “be careful.”
For Future Attention…
Finally, to end as many journal articles do, with “for future attention”: Given its importance, it’s mysterious why reviewing/editing receives so little attention in our graduate training programs and professional meetings. Public discourse that articulates and faces tough, politically charged issues (e.g., how women have been treated in the sciences) has shaped how our field has developed. Peer-review issues also deserve such open discourse. Modeling responsible reviewer practices for our students, and emphasizing their value in what we convey in training, might be worth considering. Maybe it merits a lab meeting? A spot at the conventions? Perhaps even re-thinking the priorities in one’s own professional goals? At least, there should be a resolution to complete the next overdue journal review in the pile on the desk a little more quickly. ♦