Thursday, January 19, 2012
Reviewing Econometric Papers
Chris Blattman helpfully links to the syllabus for his course on research design and causal inference. At the end is a list of questions he thinks (and I agree) would provide a useful checklist for reviewers of econometric papers. Take a look at it.
Of course, me being me, I have some issues.
1. The overarching assumption is the framework of hypothesis testing. You have a model. The model generates a prediction. You see if the prediction is falsified by a dataset, properly analyzed.
This is often a reasonable way to go, but it should not be viewed as the only approach (although it typically is). Sometimes interesting hypotheses arise in the absence of a model. They could be hunches, or involve the effectiveness of interventions, or even questions about whether difficult-to-see events actually occurred. The first link, from model to hypothesis, is not made of iron. The second, from hypothesis to analysis, can also be violated fruitfully. Sometimes one should just wade into the data and see what patterns can be found. This is especially the case when you have rich data and weak theory, which is increasingly true in economics. Of course, there is also a Bayesian perspective which differs from Blattman’s; I particularly appreciate the Bayesian critique of the null hypothesis and its implications for the objective of falsification.
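One strand of the Bayesian critique mentioned above can be made concrete. A classical p-value is often read as strong evidence against the null, but a well-known bound due to Sellke, Bayarri, and Berger shows that the Bayes factor in favor of the null can be no smaller than -e·p·ln(p), which is surprisingly large at conventional significance levels. A minimal sketch (the function name is mine, not from any library):

```python
import math

def min_bayes_factor(p):
    """Lower bound on the Bayes factor for H0 relative to H1
    (Sellke-Bayarri-Berger calibration), valid for 0 < p < 1/e.

    Values near 1 mean the data barely discriminate between
    the null and the alternative, whatever the p-value suggests.
    """
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("bound applies only for 0 < p < 1/e")
    return -math.e * p * math.log(p)
```

At p = 0.05 the bound is roughly 0.41: even a "significant" result cannot shift the odds against the null by more than about 2.5 to 1, which is far weaker than the language of falsification implies.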
2. I think there should be systematic consideration of the power of empirical tests: how well can they be expected to discriminate between true and false hypotheses, and how likely are they to avoid false negatives and false positives? There is a qualitative dimension to this assessment in most complex empirical contexts, but that is a reason for thinking through the matter openly and systematically, not avoiding it.
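To fix ideas, here is a minimal sketch of a textbook power calculation for a two-sided z-test comparing two group means, using only the standard library (the function names and the illustrative numbers are my own, chosen for the example):

```python
import math

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(effect, sigma, n, alpha=0.05):
    """Approximate power of a two-sided z-test for a difference in means.

    effect: true difference in means under the alternative (assumed)
    sigma:  common standard deviation of the outcome (assumed known)
    n:      observations per group
    """
    se = sigma * math.sqrt(2.0 / n)          # std. error of the difference
    z_crit = phi_inv = 1.959963984540054     # 97.5th percentile for alpha=0.05
    if alpha != 0.05:
        raise ValueError("sketch hard-codes the alpha=0.05 critical value")
    shift = abs(effect) / se
    # probability the test statistic lands in either rejection region
    return phi(shift - z_crit) + phi(-shift - z_crit)
```

With a modest true effect of 0.2 standard deviations and 100 observations per group, the power is only about 0.29: roughly seven times out of ten the test fails to detect an effect that is actually there. This is the kind of number a reviewer should want to see before taking a null result at face value.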
3. The issue of causal mechanisms is relegated to a single mention at the end, almost as an afterthought. My position, as I’ve argued elsewhere, is that mechanism is central to scientific explanation. A theory that posits a relationship between two sets of variables but doesn’t explain the process by which a change in one set generates a change in the other is simply incomplete. To put it differently, better a well-specified mechanism without a prediction than a prediction without a mechanism. (Best, of course, is both.) A given econometric exercise may not be the venue for specifying and estimating the parameters of a process, but (1) there needs to be at least an account of the process, and (2) thought should be given to how that process can be approached empirically. (Incidentally, this is one reason case studies are potentially much more powerful than most economists give them credit for: they can directly track processes that disappear in large-sample research.)
4. The topic of external consistency never appears at all. First, the definition: external consistency means that claims (theories, predictions, empirical methods) should be consistent with what is reliably known by other researchers. In practice, this means having a wide knowledge not only of your own field, but also of any field that considers topics bearing on yours. Economists should be knowledgeable about the work of psychologists, for instance, or sociologists, or political scientists, or historians, or anyone else who may have established results to which yours need to conform, or, failing that, for which you need a good story. Often this means working backward: you begin with a claim in your own work and then imagine what stream of research in some other field might have implications for it. Then go look for that stream.
Do I need to mention that economists have had a poor track record on #4, and that this is one reason they are not always held in high repute by practitioners from other disciplines?