Letter/Observer Forum
IRBs Should Not Be ‘Research Design Police’
I am deeply concerned about the letters of Christine Hansen (“Limitations of IRB Expertise,” Observer, April 2002) and Harold Stanislaw (“IRBs Must Understand Psychological Science,” Observer, April 2002), written in response to a letter by John Furedy (“IRBs: Ethics, Yes; Epistemology, No,” Observer, February 2002).
I believe these two responses put us on a slippery slope toward violations of academic freedom and freedom of inquiry, and toward scientific oppression. It is to the benefit of both society and science to allow as much openness in inquiry as possible. It is often difficult to know at a given moment which research study or methodology might turn out to offer substantial new benefits. When IRBs conclude that a study is worthless because they perceive design flaws, they risk imposing their paradigmatic biases on the freedom of inquiry. In so doing, they also risk nullifying a study that might produce a novel finding, even with what they perceive as “fatal” design flaws. I agree that when there are substantial risks to participants, we will probably have to make some judgment of the potential usefulness of the research, even given the danger this poses. My concern is at the other end of the spectrum, where the risks are minimal. Suggestions that it is “probably unethical” to “inconvenience” participants border, in my opinion, on direct interference with the freedom of inquiry.
The problem is that judging what is an “obvious flaw,” or judging the usefulness of the research, is simply not that easy, especially now that qualitative and postpositivistic research paradigms are becoming increasingly accepted. It is a cliché that every methodology has its limitations. I have seen far too many changes in perceptions of what is “valuable research” and what is “fatally flawed research” in my 40 years in the field to be sanguine about the reliability of such judgments. (Parenthetically, the notorious cases of previously accepted articles resubmitted to the same journals and then rejected, and the unreliability of journal reviewer judgments, speak to this as well.)
When I was in graduate school 40 years ago, the dominant research paradigm in both social psychology and psychotherapy was to do highly artificial laboratory studies in which all the variables could presumably be controlled. Naturalistic studies were viewed with great suspicion. Accordingly, my dissertation was a therapy analogue. Now the situation is just the reverse: Therapy analogue studies are seen as a waste of time, and studies must be “ecologically valid.” When I was a graduate student, a proposal for a qualitative study would have been greeted with laughter. Qualitative methodologies are still controversial in psychology, but outside of psychology they are well respected. The point is that judging the “worth” or usefulness of a research study based on who thinks what methodology is “flawed” is a dangerous and slippery slope.
Stanislaw tries to reassure us that at his university there are several IRBs and that the one that evaluates psychological research is composed of PhD psychologists, and Hansen argues that IRB members need to know the limits of their own expertise and to consult when faced with methods they are not familiar with. I wish such facile suggestions really addressed the issue. We are usually blind to our own paradigmatic assumptions and biases: What seems like a complete and obvious flaw to one person might be a research strength to another, and it is far too easy for our paradigmatic biases to lead us to see “flaws” and then, believing we are in possession of “the truth,” to decide a study is “worthless.”
I have two concrete examples. I recently presented a qualitative method I have been developing for assessing psychotherapy outcome at two conferences: the 2000 meeting of the Society for Psychotherapy Research, and APA. It was well received at both, and I received positive feedback from several eminent psychotherapy researchers. Subsequently I gave my paper to a colleague to read. This colleague was not well versed in qualitative methods, nor was he sympathetic to the underlying epistemology. He wrote back a lengthy critique of all the “flaws” in the study: small n, no control group, and so on. He questioned whether anything of use could be learned from such a flawed study. Had he been on an IRB, had that board adopted the philosophy advocated by Stanislaw and Hansen, and had this been a proposal, I do not think he would have approved it. I am not sanguine that he would necessarily have recognized the limits of his expertise and sought outside consultation.
Similarly, a proposal was recently submitted to an IRB at a colleague’s university (the details are disguised, but the facts are correct). The colleague wanted to do a study using an “empathic” interview technique, an approach that has intellectual currency in some nontraditional, postmodern circles. The IRB, thinking in traditional terms, rejected this aspect of the design because it “biased” the interview procedure, apparently unaware that the procedure has intellectual advocates.
IRBs should restrict themselves to the worthy goal of judging risk to participants and helping researchers minimize risk, and stay out, as much as possible, of the business of judging the merits of research design. Otherwise they themselves risk engaging in the ethically questionable behavior of interfering with freedom of inquiry. Accordingly, they should focus on real risks and not trumped-up ones like “wasting participants’ time.” If a study involves substantial risk, then the IRB will have to weigh the risk to participants against its best guess as to the potential value of the research. In making this judgment, the IRB will have to accept the chance that it might interfere with freedom of inquiry and possible scientific progress in order to protect participants. This is as it should be: Participant welfare must come first. At the same time, if risk is minimal, IRBs should stay out of the business of making judgments about research design entirely (this does not mean they cannot give feedback, only that their decisions should not be based on it).
Finally, the suggestion to leave issues of research design to the judgment of journals is an excellent one. By the time an article reaches a journal, the whole package of rationale, methodology, actual results, and conclusions can be judged. It is a cliché that every research project has flaws. By the time a project reaches a journal, reviewers are in a better position to weigh the merits of the results against flaws in the design. Such judgments should not be made speculatively, in advance. We do not need “research design police.”
Saybrook Graduate School and California State University, Dominguez Hills