Presidential Column
APS and Open Science: Music to Our Ears
From most of the press accounts of the ambitious project on reproducibility in psychological research published in Science this past summer, one would not have learned that, under the leadership of APS, psychological science has taken the lead in addressing an issue that is highly relevant to most, if not all, contemporary sciences. Alan G. Kraut, who recently retired as APS Executive Director and who pushed APS to address this issue vigorously during his tenure, gives a breezy insider’s account of progress on this important issue.
– APS President C. Randy Gallistel
Remember those Behind the Music profiles on VH1? You know, the ones that recounted the career of some classic rock band — ah, those early successes, that inevitable midcareer setback (so sad), and then (ta-da!) the ultimate triumph? Well, that’s kind of how I look back on the history of APS and open science — like the trajectory of a classic rock group, except that here we are talking about the music of research transparency and reproducibility. (Okay, so maybe not acid rock.)
Today, I am proud that scientists think of APS first when their organizations are looking for an example of a group that promotes good data practices in research and publishing. I have been at many meetings here in DC where we have been cited for excellence (a Grammy?) — at the National Science Foundation (NSF), in presentations by National Institutes of Health (NIH) Institute Directors, and by committees of the National Academy of Sciences. Language we developed on replication even made it into Congressional reports. But (cue the cheesy violin music) … it was not always that way.
Sure, we achieved some successes almost from the beginning. Early APS Convention sessions on using nontraditional statistics to better convey effect sizes or probability were always standing room only. Bill Estes, Founding Editor of our flagship journal, Psychological Science, used to lament the problematic nature of the .05 significance test even when it was the staple of that journal’s early issues. When James Cutting became Psychological Science’s Editor in 2003, he experimented with different ways of asking for statistics. And, over time, it got so you could predict the most downloaded article in just about any APS journal by whether it addressed methodological transparency, even when the article was critical of current practices. (Think p-hacking, as in the Simmons, Nelson, and Simonsohn Psychological Science article on “false-positive psychology.”)
But APS’s dedication to open science as an organizational priority truly began at a December 2011 lunch with the late APS Fellow Richard M. Suzman, then Director of the National Institute on Aging’s (NIA) Division of Behavioral and Social Research; Robert M. Kaplan, then Director of the NIH Office of Behavioral and Social Sciences Research (OBSSR); and me. (Remind anyone of how Cream began? Or is that just me?) The general discussion centered on methods in research that would lend credibility to published results. (Now it reminds you of Cream, right?)
NIA was considering how to release huge amounts of data from its Health and Retirement Study for reanalysis. Suzman wondered whether he should give direction to journal editors on limiting the false positives that would certainly emerge. After all, if you run thousands of tests on a huge dataset, you are bound to find a certain number that hit statistical significance by chance alone. And “you know those would be among the first published,” he said. Meanwhile, Kaplan was just starting to analyze NIH clinical trials in a way that would uncover a curious finding: Trials done before an NIH requirement to preregister hypotheses tended toward significance, and trials done afterward tended toward nonsignificance. I shared my own opinions, and the many I had heard from our members, about advocating for transparency in science. Discussion among psychological scientists seemed to be everywhere — on listservs, at meetings, and among APS editors, Board and committee members, and other leaders — about issues like “the file-drawer problem,” belief in (or suspicion of) social priming effects, and how to get important replications (and nonreplications) published.
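(For the statistically curious, here is a quick back-of-the-envelope sketch of Suzman’s point. It is my own illustration, not anything we worked out at lunch; the two-sample t test and the .05 threshold are assumptions chosen for simplicity. Run 1,000 tests on pure noise and you should expect about 1,000 × .05 = 50 “significant” results.)

    # A back-of-the-envelope simulation (illustrative only): run many
    # hypothesis tests on pure noise and count how many reach p < .05
    # by chance alone.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=1)
    n_tests, n_per_group = 1_000, 50

    false_positives = 0
    for _ in range(n_tests):
        a = rng.normal(size=n_per_group)  # group A: noise, no real effect
        b = rng.normal(size=n_per_group)  # group B: same distribution as A
        _, p = stats.ttest_ind(a, b)      # two-sample t test of a true null
        if p < 0.05:
            false_positives += 1

    # Expected count is alpha * n_tests = 0.05 * 1,000 = 50.
    print(f"{false_positives} of {n_tests} null tests came out 'significant'")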
But that lunch also happened during something of a low period caused by the ugly uncovering of a few now well-known cases of fraud in our field — the case of Diederik Stapel being the most visible — that no doubt would have been uncovered earlier if better reporting methods had been in place. And don’t forget that during the same period, The New York Times and other outlets rightly ridiculed research on precognition (i.e., ESP) published in a leading psychology journal after (rigorous?) peer review. Some thought psychological science was on the ropes — not quite the tragedy of Yoko breaking up the Beatles, but bad enough that a number of us wanted to take action. We knew that psychological science could do better. I had written as much.
So, after a number of additional lunches (hey, a band’s gotta eat!) and a couple of false starts — par for any comeback — we organized a small meeting in June 2012 at the APS headquarters. Among those in attendance were leading psychological researchers — APS Past President Henry L. “Roddy” Roediger, III; Nobel Laureate and APS William James Fellow Danny Kahneman; and APS Fellows Brian Nosek, Hal Pashler, and Jonathan Schooler. Barbara “Bobbie” Spellman, then-Editor of Perspectives on Psychological Science, also attended, along with representatives from OBSSR (Kaplan), NIA (Suzman, Lis Nielsen, Jon King), the National Library of Medicine (Deborah Zarin), and the National Institute of Mental Health (Stefano Bertuzzi). We began to explore issues related to replication and the reporting of false-positive results. And we began to formulate strategies for researchers, journal editors, and funding organizations that might remedy these problems, starting with behavioral science. That meeting was followed in September 2012 by a smaller one that included Eric Eich, then-Editor of Psychological Science, and focused on specific ways APS might leverage our own journals in an effort to lead the broader field.
These discussions resulted in changes — some voluntary, some not — in Psychological Science. We sought expanded reporting of methods and results and detailed descriptions of how sample sizes were calculated. We encouraged authors to report all variables, gave researchers the opportunity to register experiments, started offering “badges” for good practices, and so on. APS became the first organization to recognize open practices this way. (Read Eich’s editorial, “Business Not as Usual,” for a fuller description; current Interim Editor Steve Lindsay has built on this good work.)
The September 2012 meeting also continued discussion of a replication-report initiative first intended for Psychological Science that ultimately landed at Perspectives on Psychological Science under Spellman. APS Fellow Daniel Simons, who had approached me about a replication project before the meeting, also attended; he went on to coedit our Registered Replication Reports initiative, first with Alex Holcombe and now with Jennifer Tackett as well. In 2013, 31 laboratories from 10 countries applied to participate in the first replication project, numbers that were way beyond expectations. We knew we had a hit on our hands.
We also commissioned “tutorials” to educate scientists on effect size, confidence intervals, and related topics. Some tutorials have run regularly as workshops at the APS Annual Conventions. APS Fellow Geoff Cumming’s six-part online course, “The New Statistics: Estimation and Research Integrity,” is among the tutorials that are freely available online. Cumming wrote a book and a popular open-access Psychological Science article on the same topic.
In November 2012, Spellman published a full Perspectives issue on “Replicability in Psychological Science and Research Practices.” We shared the issue free of charge, and it was accessed more than 350,000 times in just its first few weeks. It now has been downloaded in whole or in part more than 660,000 times. (Platinum!) The special issue also received a great deal of attention from science writers in both social and traditional media. (Okay, not Billboard.) Since then, Perspectives has published a number of related articles, all of which are free to access and all of which have massive download numbers.
The list of APS’s open-science achievements goes on and on: We organized a cross-cutting half-day “miniconference” titled “Building a Better Psychological Science: Good Data Practices and Replicability” at the 2013 APS Annual Convention. The next day, we held an informal gathering of 30 or so editors and associate editors, senior researchers, and representatives from NSF, NIH, and the White House Office of Science and Technology Policy to discuss these issues further and begin building consensus on next steps. Based on the response to this gathering, we firmly believe similar conferences will be useful and even essential in a consensus-development process. Another mix of formal and informal open-science events was held at the meeting of the European Society of Cognitive Psychology (ESCoP) later in 2013, again to a wonderful reception. Since then, we have held other meetings and made many presentations about what APS is doing. We continue to share our activities as opportunities arise.
For instance, APS Past President John Cacioppo helped organize a 2014 NSF concert … er, workshop, on “Social, Behavioral, and Economic Sciences Perspectives on Robust and Reliable Science,” which APS helped to fund. That meeting led to another one focused specifically on how journals might promote open science. The journals meeting was the brainchild of Brian Nosek, who recently had founded the Center for Open Science (COS), and Marcia McNutt, the Editor-in-Chief of Science. There is probably no person more important to the development of the open-science movement than Nosek, who also was key to the earlier APS meetings and programs, and COS has become an important force in science transparency. (Full disclosure: I sit on the COS Board, and APS is funded by the Laura and John Arnold Foundation via COS for our own replication efforts. The Arnold Foundation is also a major funder of COS and is itself an important force in this movement.)
Nosek and McNutt’s meeting on promoting open practices through journal policies took place in November 2014 at the COS offices under the name “Transparency and Openness Promotion (TOP) Committee.” I was a part of the committee, along with Spellman and Eich. The committee announced its TOP guidelines this past June in Science. Influenced by Eich’s guidelines for submissions to Psychological Science, the TOP guidelines are meant to encourage transparency and openness in all areas of scientific research. In the guidelines, Psychological Science’s first-of-their-kind badges for Open Data, Open Materials, and Preregistration are offered as examples of journal policies already encouraging the adoption of open practices — APS and our journals are really leading the way. In fact, at one point, one of the 40 (yikes!) or so authors of the Science article suggested, “Why don’t we just say, ‘Do it like they do in Psychological Science!’?” Another factoid: Five of the 14 references in the Science piece are from APS journals. (Think of what the APS royalties would be if this were an album!)
We at APS are committed to developing and disseminating policies and practices among organizations, journals, and funding agencies that promote data sharing, sound research practices, and reproducibility. We’ve moved quickly over the past few years to take concrete steps in these directions, with several other initiatives planned for the near future. So watch this space. Who knows? We may just get the band together again!