12/21/2012

The myth of the "myth of self-correcting science": Inaccurate, illogical article reveals popular misconceptions about how science works

This morning, I read the dumbest news article I've read in recent memory.  It (unknowingly) contradicts its own conclusions so many times it could read as a parody of a news story.

The article, titled "The Myth of Self-Correcting Science," attempted to argue that a wave of high-profile cases of fraud in psychology research not only shattered researchers' "faith in science," but also proved that science is not, in fact, a self-correcting discipline.  Yes, he really does say these scandals were "salt in the wounds for students and colleagues still recovering from shattered reputations and a shaken faith in science."  As a psychology research assistant in constant communication with researchers at all career levels, I have yet to encounter anyone whose "faith in science" was shaken by these scandals.  The article's conclusion doesn't follow logically, and the author never makes a strong case for it.  He simply makes assertions, often buried in long sentences like the one above, as if hoping we won't notice--a common journalistic trick (see, I can use this technique too).

The author has no idea what "self-correcting" actually means.  "Self-correcting" doesn't mean perfect, as he seems to believe.  The very fact that correction occurs means there must be something to correct.  Even a "self-correcting" science has sloppiness, mistakes, even outright fraud.  What makes a discipline "self-correcting" is how it reacts.  A non-self-correcting discipline will reject criticism defensively, or ignore it; ultimately, it concerns itself with immediate damage control.  A self-correcting discipline may contain some members who care only about their own careers, but collectively, its members seek to improve the entire discipline.  Many members of a self-correcting discipline will ask, "How could our discipline have allowed such fraud to get published, and how can we prevent or detect it better in the future?"  This self-examination will extend beyond containing the fallout from a specific scandal to practices that have not yet become public knowledge.  A self-correcting discipline will retract fraudulent papers from its journals and investigate the faculty members responsible.  By strongly repudiating such actions and punishing those who commit them, the discipline will prevent many (but not all) similar misdeeds in the future.

As the article reveals, scientists have done all of these things.

Those familiar with the scandals know that both Marc Hauser and Diederik Stapel have been under investigation.  In fact, the final report on Stapel's case came out just last month.  Both have already suffered immense blows to their previously glowing reputations, and will probably face more serious consequences as well.  A non-self-correcting discipline would have investigated these men far more slowly, if at all, and the scandals would never have blown up to their credibility-destroying proportions.  Retractions also occur frequently; an anesthesiologist named Yoshitaka Fujii has had 172 retractions.

In the wake of these scandals, researchers turned their attention not only to serious misconduct like Hauser's and Stapel's, but also to everyday questionable research practices.  Leslie John and Joseph Simmons separately studied the much less serious practices of post-hoc theorizing and "data fishing."  Post-hoc theorizing means creating or revising your hypothesis after you've collected the data so that it fits your results; data fishing means re-checking the data after every participant to see if the results come out significant and stopping as soon as they do, which practically guarantees significant results (see the sketch below).  The impulse to prevent future scandals did not stop with the specific misconduct committed by Hauser and Stapel, but extended to the everyday "cheating" committed by countless researchers--what could be more self-correcting than that?
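
To see just how effective data fishing is, here's a minimal simulation sketch (my own illustration, not taken from either study; the sample sizes and significance threshold are arbitrary assumptions).  Even when no real effect exists at all, a researcher who re-runs a t-test after every new participant and stops at the first significant result will "discover" an effect far more often than the nominal 5% of the time:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peeking_study(min_n=10, max_n=100, alpha=0.05):
    """One simulated null study: the researcher tests after every added participant."""
    data = rng.normal(0, 1, max_n)  # the true effect is exactly zero
    for n in range(min_n, max_n + 1):
        _, p = stats.ttest_1samp(data[:n], 0)  # "is the mean nonzero?" on the first n participants
        if p < alpha:
            return True   # stop collecting and report a "significant" finding
    return False          # never reached significance: an honest null result

runs = 2000
hits = sum(peeking_study() for _ in range(runs))
print(f"False positive rate with peeking: {hits / runs:.1%}")
# Typically around 20%, versus the nominal 5% a fixed, pre-set sample size would give.

The exact inflation depends on how often and how long one peeks, but the direction is always the same: every extra look is another chance for chance.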

"Clearer identification of the problems associated with some research practices is incredibly helpful," writes Linda Skitka, who sits on numerous journal editorial boards. "Because I'm guessing at least some scholars who engaged in questionable practices did not recognize the full implications of doing so. Given the intense attention these issues are now getting in the field, they certainly know better now."

Intense attention?  Does this sound like a discipline that turns a blind eye to misconduct, or one that works hard to prevent it?  This quote doesn't come from some nobody, either.  Because Linda Skitka sits on numerous journal editorial boards, she has significant power to decide which future research gets published and which rejected.  Her editorial choices can significantly improve or worsen the state of published science in the research areas her journals represent.

Nor is Linda Skitka the only scientist eager to reform scientific research from within.  The article also describes a key reform figure, Brian Nosek, who works on the Open Science Framework and the Reproducibility Project:

 "[Nosek's] professional commitment to ferreting out injustice and implicit bias...would seem to undergird a life-long fixation with good and evil. He's the kind of man you can see investing considerable amounts of time and energy trying to save science from its own dark side."
Nosek himself is a scientific researcher, as is Nobel Laureate Daniel Kahneman, also an advocate for reform.  What can one call scientists "trying to save science from its own dark side" other than self-correcting?

The author's second misconception concerns how "self-correction" actually works.  He seems to assume that the researchers doing the correction are the same ones engaging in misconduct, an extremely naive idea.  He writes that 
"underlings and younger scientists were often at the forefront of reform, trying to convince their elders to take the problem more seriously. The old guard tends to claim that critiques are overblown, that outside reforms and practices will hinder or hurt science, and that science is a self-correcting process. The new guard tends to embrace transparency and openness, seeing reform as the best way to salvage damaged reputations and keep the field from falling into disrepute. Incidentally, most of the recent fraud cases were unearthed by whistle-blowers (usually graduate and undergraduate students) working within the lab or... Uri Simonsohn. But none were revealed by the "self-correcting process of science."
How, exactly, is the "self-correcting process of science" supposed to work, then?  However young the students may have been, they were still scientists working in the field.  In any discipline, whistle-blowers are likely to be those without reputations on the line, who have the most to gain and the least to lose.  That does not mean that their efforts don't come from the discipline (the "self" part) or don't help fix it (the "correction" part).

Even if this point made any sense, the author himself undercuts it by describing reformers like Nobel laureate Daniel Kahneman, clearly no "underling" trying to make a career through whistle-blowing.

Lastly, the author fails to put the issue of self-correction in science in a broader context by comparing the sciences to other academic research disciplines.  He quotes Nosek at length on the subjectivity involved in defining crucial psychological concepts like "intelligence" or "morality," clearly trying to make the case that science is by its nature vulnerable to fraud because one can define key ideas in self-serving ways.  

The social sciences don't have the luxury of physical object variables like frogs; the components of studies are often more abstract concepts like morality or intelligence. "There's no such thing as 'frogishness,'" sighs Nosek, addressing the issue. "Well," he recants, ever the scientist, "I suppose you could have differing degrees of frogishness; but basically, everyone agrees on what a frog is."  People have different concepts of what intelligence is. "There are more and less useful ways of trying to define these things," says Nosek. But basically, the intellectual subjectivity inherent in the social sciences leaves more room for self-serving interpretation of the data than with hard variables. "When you're operating on the frontiers of what is known, you're going to make mistakes," Nosek explains.
Well, of course.  But psychological science--or any life science, really*--has much less of this problem than any other discipline except chemistry, physics, and engineering.  Where is the concern about self-correction in literature, philosophy, or history?  Should economists and political scientists start doing mea culpas?  What makes even messy sciences like psychology science is the awareness of conceptual fuzziness and the insistence on attempting to overcome it--however impossible it may be to do so perfectly.  The very fact that Nosek brings up the fuzziness of concepts as a problem itself indicates the self-improving, self-correcting attitude of working scientists.

In short, the substance of this article undercuts its claims.  

Why bother critiquing a single article like this?  I think the conceptual errors here reflect how a number of people--and not only journalists--(mis)understand science.  Certainly, articles like this confirm such people in their errors and give them ammunition in the form of reputable news magazines to cite and links to send to their friends.  When the author says researchers' "faith in science" was shaken, he wasn't talking about them; he was talking about himself.  I think he had an unrealistic image of science as some sort of perfect discipline, not as an imperfect discipline staffed by fallible humans that heroically tries to improve itself a bit at a time.  Then news of big scandals and small questionable research practices knocked science off this pedestal.  When people expect science to be perfect and then discover that it isn't, they turn against it emotionally and decide that science and its conclusions can't be trusted.  While science isn't perfect--Newton's laws don't describe every situation, and the theory of evolution has been expanded and changed significantly since Darwin's day--it's still the closest thing we have to knowing what's true.  To reject science or scientific findings for their imperfections, without asking where we should turn for a replacement, would be irresponsible.

*If you think the biological sciences are objective, try defining "health" or "improvement" in a drug research context.
