Tuesday, April 16, 2013
Ken Rogoff and Carmen Reinhart, Meet Marc Hauser
In violation of the solidarity that ought to exist between ex-chessplayers (albeit at very different levels of achievement), when I hear about the errors and dubious data selection and modeling choices of Rogoff and Reinhart, my thoughts turn to Marc Hauser.
Hauser, you may recall, was a leading evolutionary biologist and something of a pop culture star, who cranked out high profile papers from his lab at Harvard. Eventually it was determined that he had systematically doctored evidence in order to produce the results that fit the theory he was peddling. There was a formal inquiry, and he left the institution in disgrace.
I bear no personal animosity toward Hauser. For all I know he may have drifted into the methods that got him into trouble a little at a time, and he may have seen each step as temporary, small and excusable. A science, however, has to be hard-nosed about this. As I’ve argued elsewhere (here, here and here), if there’s a core feature that distinguishes science from other human activities and renders it historically progressive (it gets better over time), it is the enormous weight sciences place on Type I error, the risk of false positives, compared to Type II. This enables a division of labor to flourish, since specialists can rely on the carefully vetted findings of other specialists. It also means that there will rarely be a step backward, a shift of a scientific field toward greater error over time from which it later has to retreat. You can’t say this about poetry, politics or palmistry.
Or economics. As far as I can tell, there are no serious consequences for economists who commit egregious Type I errors. They get to go on with their careers, and everyone just shrugs it off. No Marc Hausers here, folks.
So much the worse for us. Naturally, in applied work the risks of false positives and negatives have to be balanced by their respective costs in good Bayesian fashion, and we can live with a stream of Type I errors if that minimizes the overall cost of poor decision-making. But in matters of theory, where basic economic relationships are putatively identified, there should be no excuse for errors of the sort R&R appear to have committed. They were made more culpable by the pair’s long, long delay in releasing their source data.
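The Bayesian point above can be made concrete with a toy calculation. This is a minimal sketch, not anything from R&R's work: all the numbers (prior, costs, error rates) are hypothetical, chosen only to show that when missed effects are costly relative to false alarms, a looser significance threshold can minimize expected cost.

```python
def expected_cost(p_true, cost_fp, cost_fn, alpha, power):
    """Expected cost of a decision rule with false-positive rate `alpha`
    and true-positive rate `power`, given a prior `p_true` that the
    effect is real. Type I errors cost `cost_fp`; Type II cost `cost_fn`."""
    return p_true * (1 - power) * cost_fn + (1 - p_true) * alpha * cost_fp

# Hypothetical numbers: misses cost 10x what false alarms do.
# A looser test (alpha=0.20, power=0.90) beats a stricter,
# lower-powered one (alpha=0.05, power=0.60) in expected cost.
loose = expected_cost(p_true=0.5, cost_fp=1, cost_fn=10, alpha=0.20, power=0.90)
strict = expected_cost(p_true=0.5, cost_fp=1, cost_fn=10, alpha=0.05, power=0.60)
print(loose, strict)  # 0.6 vs 2.025
```

That is the applied-work case; the post's point is that theory, where relationships are claimed to be identified, deserves the stricter standard regardless.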
These economists deserve a formal review—just like Marc Hauser.
6 comments:
The broader point about science being defined by a unique aversion to Type I errors is really important, and I haven't seen it made elsewhere. I'd love to see you make it at greater length than a blog post (or three). Ever thought about developing it into an article of some kind?
Anyway, nice to see our alma mater getting some deserved attention.
Other than the cited piece for PAE, no I haven't written it up. Alas, I am sitting on scads of articles that beg to be written. Maybe when the books are out....
Incidentally, that's quite a scalp for grad student Thomas Herndon, who's a graduate of Evergreen, where I teach. Don't mess with those Greeners.
Peter,
What is the role of the "location of publication" with regard to this issue? Where have R&R presented the core of their erroneous research? Has the economics profession no means by which a peer review process takes place, both before and after the publication of such research?
Do you suppose that access to external sources of income has something to do with the potential for Type I errors in support of questionable, but potentially lucrative, hypotheses?
One more thought. It seems to me that Sandwichman's post, "Piestein", goes a long way toward explaining phenomena such as R&R's misuse of scientific publication.
Jack, my purpose was to suggest that the problems exemplified by R&R go beyond the small subsets that have been highlighted -- articles published in the Papers & Proceedings issue of the AER (theirs was) which are not peer reviewed, articles that are seized on by conservatives and financial interests, etc. All of these things played a role, but there is a more pervasive issue, IMO.
There isn't a culture of transparency and replication in economics, not yet at least. Worse, there is no real respect for Type I error minimization in the field's research and publication protocols. For instance, you can run zillions of regressions and report only the ones that "worked". You don't have to say anything about the power of your empirical test, e.g. how well it distinguishes between your hypothesis and competing hypotheses. (It's optional and you can choose which competitors to mention.) You never have to spell out precisely all the assumptions on which your empirical work is conditional. (Reviewers may needle you about the assumptions that are their particular bugaboos.) And if you pin your reputation to a result, and the result is subsequently discredited because of errors on your part, there are at most transitory reputational consequences. It all adds up to disdain for the importance of distinguishing between things you suspect might be true and things you can be confident are true, which is fundamental to any true science.
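The "run zillions of regressions and report the ones that worked" problem is easy to demonstrate. Here is a toy simulation, not tied to any actual study: it runs 200 tests on pure noise, where the null hypothesis is true by construction, and counts how many clear the conventional 5% significance bar anyway. The sample sizes and counts are arbitrary illustrations.

```python
import random
import statistics

random.seed(1)  # fixed seed so the run is reproducible

def t_stat(sample):
    """t statistic for the null hypothesis that the true mean is zero."""
    n = len(sample)
    return statistics.mean(sample) / (statistics.stdev(sample) / n ** 0.5)

# 200 "regressions" on noise: every significant result is a Type I error.
hits = 0
for _ in range(200):
    sample = [random.gauss(0, 1) for _ in range(30)]
    if abs(t_stat(sample)) > 2.045:  # two-sided 5% cutoff, 29 d.f.
        hits += 1

print(hits)  # typically around 5% of 200 -- a pile of spurious "findings"
```

A researcher who reports only the `hits` and files away the rest has manufactured significance out of nothing, which is exactly why disclosure and replication norms matter.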
"....which is fundamental to any true science."
And therein lies the rub. So long as words can be transmuted into financial value, there will always be those who understand how to make hay in the absence of sunshine. Too much is made of their proclamations and too little is required of their proofs. Science is the wrong word if this is the case. Philosophical debate is a poor substitute, especially given that ideology often directs the intentions of a philosopher. If there is so little consequence for the error and so much reward for the intention, the debate will be forfeited to the deepest pocket rather than the greatest truth. Too bad.
And the problem is not exclusively the intrusion of the "stink" tank phenomenon. Check the governing boards of nearly all well-regarded universities: overrun by the business community and intent on assuring a general adherence to an ideology rather than a search for truth.