New Content From Perspectives on Psychological Science
Economics and Epicycles
Satoshi Kanazawa
Kanazawa compares the paradoxes and anomalies that behavioral-economic studies have identified in standard economics to the epicycles of geocentrism (i.e., the orbital adjustments added to geocentric models to explain the apparent retrograde motion of the planets). Just as epicycles could not salvage geocentrism, behavioral economics cannot salvage standard economics as a model of human behavior because that model is fundamentally wrong. He suggests that evolutionary biology might offer a better model of human behavior because it can explain the behavioral and cognitive biases exhibited by humans and other species.
Why Hypothesis Testers Should Spend Less Time Testing Hypotheses
Anne M. Scheel, Leonid Tiokhin, Peder M. Isager, and Daniël Lakens
Scheel and colleagues propose that researchers use nonconfirmatory research activities to obtain the information needed to make hypothesis testing more informative. They propose that, before testing hypotheses, researchers spend more time forming concepts, developing valid measures, establishing causal relationships between concepts, and identifying boundary conditions for the proposed relationships and auxiliary assumptions. Scheel and colleagues believe that providing incentives to engage in nonconfirmatory research would both produce stronger, more testable theories and support the reform movement in psychology urging researchers to be more mindful of null-hypothesis significance testing, whose misuse often renders research and theories uninterpretable.
Theory Before the Test: How to Build High-Verisimilitude Explanatory Theories in Psychological Science
Iris van Rooij and Giosuè Baggio
Focusing on effects might lead psychological science to depart from its primary goal of explaining psychological capacities, van Rooij and Baggio explain. Drawing on Marr's levels-of-analysis framework, they discuss the benefits of extending the levels-of-analysis rationale to different areas of psychological science. They show how theoretical analyses can establish a theory's minimal plausibility even before it is tested against empirical data. Building plausible explanatory theories may help researchers leverage the study of effects to better understand psychological capacities and address critical issues in psychological science.
So Useful as a Good Theory? The Practicality Crisis in (Social) Psychological Theory
Elliot T. Berkman and Sylas M. Wilson
Kurt Lewin viewed practicality—the ability to speak to social issues—as a valuable characteristic of scientific theories. Nowadays, however, theories are mainly evaluated by how well they account for laboratory data, Berkman and Wilson observe. Despite some exceptions in clinical, intergroup, and health domains, most psychological theories lack relevance, accessibility, and applicability to society, according to the authors. They describe this practicality crisis and illustrate the use of practical theory in the field of self-regulation. They also suggest several incentives in publishing, academia, and research funding that could foster interest in practical theories.
Heterogeneity of Research Results: A New Perspective From Which to Assess and Promote Progress in Psychological Science
Audrey Helen Linden and Johannes Hönekopp
Heterogeneity occurs when multiple attempts to replicate a research result produce findings that vary more than would be expected from sampling error alone. Linden and Hönekopp explored heterogeneity in 150 meta-analyses from varied fields of psychology and found it to be high, reflecting low coherence between the concepts researchers use and the observed data. However, replications that closely matched the original studies appeared to yield only moderate heterogeneity. Linden and Hönekopp discuss the implications of these findings for theory testing and suggest that reducing heterogeneity could advance both psychological science and the design of practical applications.
Anatomy of a Psychological Theory: Integrating Construct-Validation and Computational-Modeling Methods to Advance Theorizing
Ivan Grahek, Mark Schaller, and Jennifer L. Tackett
Well-specified theories are as important for the development of reliable empirical psychological science as sound research methods and statistics. Grahek and colleagues discuss theory specification and development in two research traditions—computational modeling and construct validation. By identifying the commonalities and differences between theoretical reasoning in these two traditions, the authors propose an integrated method to develop psychological theories that can lead to better explanations and predictions. Grahek and colleagues also explore what a well-specified theory should contain and how researchers can question and revise such a theory.
The Role of Replication Studies in Theory Building
Elizabeth Irvine
Irvine draws on work in the philosophy of science to show that meaningful replication studies require theory development. Although replication studies have been presented as a way of developing theory (by allowing researchers to update theoretical claims), Irvine shows that conceptual replications tend to offer little theoretical payoff. The author proposes viewing replication attempts as exploratory studies that researchers can use to refine the scope of a theory and the accuracy of measurement procedures, and thus drive cumulative scientific progress.
Quo Vadis, Methodology? The Key Role of Manipulation Checks for Validity Control and Quality of Science
Klaus Fiedler, Linda McCaughey, and Johannes Prager
A proper manipulation check, which must be operationally independent of the dependent variable, verifies that an experiment's manipulation achieved its intended purpose. Manipulation checks are critical for establishing the viability of a theoretical hypothesis's logical premise and, therefore, for scientific quality control. They also contribute to clever research design, carry over into theorizing, and have implications for replication. Fiedler and colleagues propose a future methodology that replaces scrutiny of statistical significance (i.e., the p < .05 threshold) with validity control and diagnostic research designs.