Mark Thoma has wisely directed us to a new paper by Stanford’s Paul Pfleiderer that makes the case for distinguishing between models that do and don’t have realistic assumptions. It’s a great read and makes a number of points I would heartily endorse.
Meanwhile, I’d like to add one more argument into the mix. I take it as axiomatic that the economic world is far too complex and variegated to be comprehended or forecast by any single model. Sometimes one set of factors is paramount, and a particular model captures its dynamics, and then another set takes over, and if you continue to follow the first model you’re toast. There are complicated times when you need a bunch of models all at once to make sense of what’s going on, even when they disagree with one another in certain respects.
So how do you know which model to use when? The answer has to be that you observe as carefully as you can the conditions that obtain right then and there to determine which are the most salient, and then you pick the model(s) that are best fitted, by their assumptions, to those conditions. This of course is precisely what we mean by the realism of assumptions: are they reflective of the reality to which they might be applied?
The doctrine that the realism of assumptions doesn’t matter could be defensible only in a world in which it is axiomatic that a single model, the one that wins the empirical prediction game, will be used in all circumstances to the exclusion of all the rest. That axiom underlies the canonical Friedman formulation in particular. Pfleiderer’s contribution is to show that even this is not enough: given the limited power of real-world tests, some filtering of models according to their assumptions is mandatory.