Improving Research Practices, From Beginning to End
Efforts to promote replication, preregistration, and new analytic approaches are just some of the advances psychological scientists have been making toward improving research practices in the field. With the recognition that long-accepted research practices have inherent problems comes the question: What now?
As the field tries to answer this question, the one mistake it must not make, says psychological scientist Alison Ledgerwood, is to assume that there will be an easy and obvious fix.
“The single most important lesson we can draw from our past in this respect is that we need to think more carefully and more deeply about our methods and our data,” Ledgerwood writes in her introduction to a special section on improving research practices in an issue of Perspectives on Psychological Science.
“Any set of results, whether empirical or simulated, gives us only a partial picture of reality. Reality itself is always more complex,” Ledgerwood notes. “If we want to study it, we need to be honest and open about the simplifying choices that we make so that everyone — including ourselves — can evaluate these choices, question them, and explore what happens when different choices and assumptions are made.”
As editor of the special section, Ledgerwood has underscored this point by assembling a series of articles focused on improving research practices at various points of the process, from deciding how to optimize the design of a single study to conducting a comprehensive evaluation of an entire research topic.
Choosing a Research Strategy
Determining an optimal sample size — one that maximizes statistical power while accommodating practical constraints — is an important step in the design of any research study. In their article, Jeff Miller and APS Fellow Rolf Ulrich propose a quantitative model that enables researchers to calculate the sample size that maximizes “total research payoff” across the four possible study outcomes: true positive, false positive, true negative, and false negative. As part of the model, researchers must explicitly weigh the relative importance of these outcomes and identify their assumptions about the base rate of true effects in a given domain, thereby making clear the values and assumptions that guide their thinking.
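As a rough illustration of the underlying logic (a minimal sketch, not Miller and Ulrich's actual model; the payoff weights, base rate, effect size, and normal-approximation power calculation below are all illustrative assumptions), one can weight the four outcomes by their probabilities at each candidate sample size and pick the size that maximizes expected payoff per participant:

```python
# Illustrative sketch of payoff-maximizing sample-size selection for a
# two-group study. All numeric values are assumptions, not the authors'.
import numpy as np
from scipy.stats import norm

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test (normal approximation)."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * np.sqrt(n_per_group / 2)          # noncentrality for effect size d
    return 1 - norm.cdf(z_crit - ncp)

def expected_payoff_per_participant(n_per_group, base_rate, d, alpha,
                                    u_tp, u_fp, u_tn, u_fn):
    """Weight the four study outcomes (true/false positive/negative) by their
    probabilities, then divide by total participants so larger studies pay a
    resource cost."""
    pwr = power_two_sample(d, n_per_group, alpha)
    payoff = (base_rate * (pwr * u_tp + (1 - pwr) * u_fn)
              + (1 - base_rate) * (alpha * u_fp + (1 - alpha) * u_tn))
    return payoff / (2 * n_per_group)

candidate_ns = np.arange(10, 500)
payoffs = [expected_payoff_per_participant(n, base_rate=0.5, d=0.4, alpha=0.05,
                                           u_tp=1.0, u_fp=-1.0, u_tn=0.2, u_fn=-0.2)
           for n in candidate_ns]
best_n = candidate_ns[int(np.argmax(payoffs))]
print(f"Payoff-maximizing sample size per group under these assumptions: {best_n}")
```

Changing the assumed base rate of true effects or the relative cost of a false positive shifts the payoff-maximizing sample size, which is exactly the kind of value judgment the model asks researchers to state explicitly.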
Checking Statistical Assumptions
Examining whether data meet or violate the assumptions of a given statistical test is a critical part of data analysis, but it is a step that is sometimes overlooked. When these assumptions go unchecked, the results of the tests, and the conclusions drawn from them, may be invalid. Louis Tay and colleagues offer a tool, “graphical descriptives,” that aims to make this process clearer and easier for researchers. The tool generates data visualizations that allow scientists to see whether their data meet various statistical assumptions; with these visualizations in hand, scientists can also clearly communicate rich details about their data set when reporting their results.
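For a concrete sense of what such diagnostic visualizations can look like (a generic sketch, not the authors' graphical-descriptives tool; the simulated data and simple linear model are assumptions for the demo), a researcher might plot residuals against fitted values and against normal quantiles before trusting a linear regression:

```python
# Generic visual checks of regression assumptions: constant variance and
# normality of residuals. Data here are simulated purely for illustration.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(scale=1.0, size=200)

slope, intercept = np.polyfit(x, y, 1)          # simple linear fit
fitted = intercept + slope * x
residuals = y - fitted

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
axes[0].scatter(fitted, residuals, s=10)
axes[0].axhline(0, color="gray")
axes[0].set(xlabel="Fitted values", ylabel="Residuals",
            title="Check for nonconstant variance")
stats.probplot(residuals, dist="norm", plot=axes[1])   # Q-Q plot for normality
axes[1].set_title("Check residual normality")
fig.tight_layout()
plt.show()
```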
Dealing With Data
Each of the choices that researchers make in deciding how to process their raw data — whether variables should be combined or transformed, when data should be included or excluded, how responses should be coded — shapes the resulting data set in a particular way. While best practices may guide certain decisions, scientists aren't always choosing between an obviously right and an obviously wrong answer. Using “multiverse analysis,” an analytic approach proposed by Sara Steegen, Francis Tuerlinckx, Andrew Gelman, and Wolf Vanpaemel, researchers can see how different data-processing decisions would affect their results. The approach shows whether particular decisions influence outcomes in meaningful ways and helps researchers transparently report the pattern of results across the full set of reasonable processing choices.
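A bare-bones version of the idea might look like the sketch below, which is purely illustrative rather than the authors' implementation; the simulated reaction-time data, exclusion rules, and transformations are hypothetical stand-ins for choices a real analyst would have to defend:

```python
# Minimal multiverse-style analysis: re-run the same test under every
# combination of defensible data-processing choices and inspect the grid.
from itertools import product
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=300)                                       # hypothetical predictor
rt = np.exp(6.0 + 0.08 * x + rng.normal(scale=0.5, size=300))  # simulated reaction times

# Hypothetical processing choices a researcher might reasonably defend.
exclusion_rules = {"none":   lambda v: np.ones_like(v, dtype=bool),
                   "rt<3sd": lambda v: v < v.mean() + 3 * v.std()}
transforms = {"raw": lambda v: v,
              "log": np.log}

for (ex_name, ex_rule), (tr_name, tr_fn) in product(exclusion_rules.items(),
                                                    transforms.items()):
    keep = ex_rule(rt)
    r, p = stats.pearsonr(x[keep], tr_fn(rt[keep]))
    print(f"exclude={ex_name:6s} transform={tr_name:4s}  r={r:+.3f}  p={p:.3f}")
```

If the conclusion holds across every reasonable combination, that is worth knowing; if it hinges on one particular exclusion rule or transformation, that is worth reporting too.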
Meshing Meta-Analysis With the Real World
Meta-analysis is an increasingly popular tool for evaluating the combined research output in a particular domain, but, like any statistical tool, it has its shortcomings. In their article, Robbie C. M. van Aert, Jelte M. Wicherts, and Marcel A. L. M. van Assen present simulations indicating that p-hacking and effect-size heterogeneity lead two meta-analytic techniques, p-curve and p-uniform, to produce biased estimates of the average population effect size.
On a similar theme, Blakeley B. McShane, APS Fellow Ulf Böckenholt, and Karsten T. Hansen discuss how several common meta-analytic approaches yield biased estimates in the presence of publication bias and effect-size heterogeneity — conditions that the authors note are ubiquitous in psychological research. The authors point researchers toward a meta-analytic approach that does a better job of handling both methodological issues.
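The core problem is easy to see in a toy simulation (an illustration of publication bias in general, not a reproduction of either article's analyses; the true effect size, sample size, and number of studies are arbitrary assumptions): when only statistically significant studies make it into the pool, a simple average of their effect sizes overshoots the true effect.

```python
# Toy demonstration of how publication bias inflates a naive meta-analytic
# average of effect sizes. All simulation settings are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_d, n_per_group, n_studies = 0.2, 30, 2000
se = np.sqrt(2 / n_per_group)                          # approximate SE of Cohen's d
observed_d = rng.normal(loc=true_d, scale=se, size=n_studies)
significant = observed_d / se > stats.norm.ppf(0.975)  # significant in the positive direction

print(f"True effect size:                      {true_d:.2f}")
print(f"Average over all simulated studies:    {observed_d.mean():.2f}")
print(f"Average over significant studies only: {observed_d[significant].mean():.2f}")
```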
The full Special Section, “Improving Research Practices: Thinking Deeply Across the Research Cycle,” appears in Perspectives on Psychological Science.