By Megan McArdle / BLOOMBERG VIEW
We know what happens when science is “politicized”: Think of global warming. Politicizing science leads both sides to retreat into bunkers, hurling insults at each other and trying to cut each other off at the knees by any means necessary.
But what happens when science isn’t politicized? Part of the answer may be the epidemic of replication failures we now seem to be seeing. A recent paper from the Federal Reserve argues that economics has problems similar to those recently found in psychology: A lot of research results are getting published, and a lot of the interesting findings can’t be replicated, often because key data or instructions aren’t available.
Now, that is not, by itself, necessarily a problem. As I’ve written before, “finding an interesting result that fails replication” is an important part of science. We should not expect every paper to get a replicable result, not even papers that are meticulously done to the highest research standards. The outliers, the coding errors, the unforeseen model weaknesses—these we will always have with us.
But “the authors did not provide enough data to replicate their work” is not a problem science should ever have; neither is “a weak result lived on in the literature for years before anyone tried to repeat it.” I read these papers about replication failure and think, “Aren’t scientists supposed to be competitive? Why aren’t these guys trying to destroy each other? Or at least provide a reality check? How has this gone on for so long? Why do so many journals allow authors to publish without providing the necessary tools to replicate their work?”
Of course, many scientists do some of this. But the recent spate of broad replication failures suggests that they’re not trying to do it nearly enough. And cases I can think of where the system worked are often political. Take Neumark and Wascher’s attack on Card and Krueger’s work on the minimum wage. The debate is hardly resolved. Partisans of both sides are still confidently declaring that the other side’s proposition about the minimum wage has been “debunked,” even as the research goes on. The debate has often been uncivil. But it is a robust debate in which scientists are hunting for problems in other scientists’ work.
Other relatively recent cases in economics include Donohue and Levitt’s paper on abortion and crime, the housing-based critiques of Piketty’s book, and the coding error discovered in Reinhart and Rogoff’s work on debt and growth. We might wish that the volume of the debate were turned down a notch. But at least there’s serious science happening—and even better, that science is making its way to the news media and the public.
In too many cases, this does not seem to be happening. One can cite any number of reasons: to get tenure and grants, one needs publications, and it is hard to get published if you’re replicating a previous study; meticulously replicating someone else’s work isn’t nearly as much fun as designing your own research; and people who invest a lot of time and effort in developing a data set aren’t eager to share it so that far-flung researchers can free ride on their work. But I’d like to advance another explanation, one now being aired by the folks at Heterodox Academy: science is becoming politically narrower, and that is making it weaker.
A few years back, a friend who is a securities lawyer, and therefore very interested in books on the financial crisis, asked me a very good question: How does journalism guard against the possibility of false facts entering the data stream? These tomes are extensively reported, and each has its nuggets of new information gleaned from many hundreds of hours of interviews. Often interview subjects are hard to get to sit down, much less to go on the record. What happens if those interviews yield false information?
Journalists do, of course, attempt to guard against that sort of thing, for example by getting multiple sources. But we also get things wrong sometimes. And it would be folly to think that these errors are always exposed. When they are not, these “facts” get repeated until they are heard as facts.
There is one area, however, where a robust response is guaranteed, and that’s in politics. Publish something that makes one side of the political spectrum look bad, and you can be sure that the next day, there will be hordes of interns, reporters and political staffers devoted to exposing every last weakness in the argument. Had Reinhart and Rogoff published a few years earlier, it seems unlikely that they would have attracted the level of attention that they did from outside the slightly stuffy world of international public finance wonks. As it was, their work became the focus of a heated debate over stimulus, government spending and deficits—and their coding error quickly became big news.
When almost everyone in your field leans toward one side of the political spectrum, that reaction—that teeth-grinding, hair-pulling, eye-rolling “That can’t be right!”—gets blunted.
Of course, it’s no fun having your work under attack by political partisans. I know: I’ve spent the last 15 years of my career in the fray, knowing that much of what I publish is going to get someone’s blood boiling and set their eyes scanning for mistakes.
And yet, for the profession as a whole, this is a good thing. It makes us more careful, and more important, it means that our inevitable errors are not immortal. Journalism has a lot of ways to protect against errors before publication, and it needs them all. Journalism also benefits from the hordes of folks who check us after we’ve done every check we can think of—because the cognitive biases to which all humans are prey mean that there are probably some checks we couldn’t think of.
Similarly, science is going to need to do something about publication bias, and by extension, about the way that tenure and research funding are handed out. A new study intended to test a past conclusion is not unoriginal; it’s essential. We should respect and demand that kind of rigor across the sciences, not only for politicized topics.