Update on NIH Peer Review Changes
As previously reported (see the December 2007 and January 2008 Observers), the National Institutes of Health (NIH) is contemplating significant changes to its peer review system. Peer review has been put under the microscope and dissected, and NIH is now in the process of putting it back together again. Internal and external working groups studying the issue have issued their findings and recommendations, and NIH leadership is currently deciding which recommendations to implement.
What follows is a summary of the “challenges” facing the current system and some of the suggested actions that were presented by the working groups:
Challenge 1: Reducing the Administrative Burden on Applicants, Reviewers, and NIH Staff. One way of achieving this is to create a new “Not Recommended for Resubmission” category, which means some applications would be turned down right off the bat, sparing everyone unnecessary hoop-jumping. Another is to have amended applications reviewed by a new panel rather than the original one, avoiding both bias and extra work for the original panel. For reasons that need no explanation, the most popular recommendation of the whole lot is to shorten the application! (It currently stands at 25 pages.)
Challenge 2: Enhancing the Rating System. The suggestions here vary: rating multiple, explicit criteria individually in addition to providing an overall score and ranking; giving unambiguous feedback to all applicants; and restructuring the rating criteria. (This is where psychological science comes into play: APS Fellow Hal Arkes’ work in psychometrics is the foundation for this last recommendation.)
Challenge 3: Enhancing Review and Reviewer Quality. Allowing applicants and reviewers to correct factual errors before the official review, in the form of a “prebuttal,” would save time. How to provide incentives that would encourage more people to serve as reviewers is a question that has long stumped NIH, but there are several suggestions here: flexible service and flexible deadlines (already enacted for charter reviewers; see NIH Notice NOT-OD-08-026, http://grants1.nih.gov/grants/guide/notice-files/NOT-OD-08-026.html) and linking mandatory service to prestigious awards.
Challenge 4: Optimizing Support for Different Career Stages and Types. During tight budget times, supporting young investigators is essential. The main recommendation for supporting young scientists is to pilot the ranking of early-career investigators against each other (and separately from senior investigators). For well-established scientists, the report calls for refining and boosting awards geared toward this pool, such as the NIH Director’s Pioneer Awards.
Challenge 5: Optimizing Support for Different Types and Approaches to Science. The name of the game here is “transformative” research, the fervor for which has also grabbed the National Science Foundation (see “National Science Foundation Update,” February 2008 Observer). A variety of awards already aim to stimulate such research (e.g., the NIH Director’s Pioneer and New Innovator Awards and the new, flashy-sounding EUREKA [Exceptional, Unconventional Research Enabling Knowledge Acceleration] awards), and the report endorses their expansion to a minimum of 1 percent of all R01-like awards. For clinical research, more clinician-scientists need to be enticed to serve on review panels, for example by offering flexible service options. Finally, to accommodate the growth of interdisciplinary research, an editorial board model for review is recommended.
Challenge 6: Reducing the Stress on the Support System of Science. Most NIH investigators (72 percent, to be exact) have one grant, but 783 have four or more. One way to ameliorate the effects of the budget and review crunches is to require a minimum 20 percent effort on research project grants, so that investigators don’t spread themselves too thin. This may not sit well with investigators holding many grants, but in this tight budget era, some feel sacrifices have to be made.
Challenge 7: Meeting the Need for Continuous Review of Peer Review. This isn’t a new suggestion, but it’s being made once again: the report tells NIH to mandate a periodic, data-driven, NIH-wide assessment of the peer review process, including the collection of baseline data and the development of new metrics (psychometrics, anyone?) to track key elements of the system in the coming years.
To recap the process: During the last year and a half, NIH solicited feedback from NIH staff, advocacy organizations, professional societies, and the research community on how to improve peer review. NIH received over 2,800 comments, the majority of which concerned reviewers, namely how to recruit better-qualified ones. The recommendations were compiled into this report, which was given to the NIH Director in early March 2008. The plan is to implement select recommendations from the report, along with input from a Request for Information (RFI) that gathered public comments, this spring and summer.
But there has already been some tinkering and experimenting going on (most staff are scientists, after all; they can’t help themselves), both at the Center for Scientific Review and at the Institutes themselves. Ongoing initiatives include videoconferencing, the development of an automated referral system, evaluations of shorter applications, and asynchronous electronic discussion (AED). Depending on how these pilot programs fare, some may advance to the next phase of testing. Stay tuned for developments.
The NIH report on peer review can be found at: http://enhancing-peer-review.nih.gov/.