Monday, March 5, 2012
The Real Problem with Microfoundations
Suppose you lived in a world in which there were two branches of economics, micro and macro. Microeconomics in this world is rigorous, precise, honed over many decades of increasingly sophisticated analysis, and confirmed in almost every empirical test. It is axiomatic, its propositions invested with mathematical certainty. It has proved its worth in one application after another. Macroeconomics, alas, is everything micro isn’t. The axiomatic structure is missing; much theoretical work is essentially ad hoc. Data are thinner and models are at risk of failing the first out-of-sample challenge. Even outright embarrassment is a continuing problem: leading macroeconomists often make predictions that are not simply wrong, but profoundly, cosmically wrong. It’s a crap shoot.
If this were your world, wouldn’t you want to base your macro on micro to the fullest possible extent? Bend and squeeze the good, rigorous stuff as far as you can, so as to minimize the use of the dubious macro ad hocery?
I think something like this is in the back of the minds of most macroeconomists. Optimization models, general equilibrium: these are things we know to be right and true, so the more we can use them to generate macro models, the more solid our ground. Arguments with more apparent technical content that are sometimes trotted out don’t really change the situation. Take the Lucas Critique: we need to take into account how people’s behavioral patterns will change in response to changes in policy. This is certainly true, but it is another matter entirely to put your faith in models of optimizing individuals (or clone armies) as the vehicle for understanding behavioral shifts. Whatever the purpose, if I believed in micro the way most economists do, I would be for microfoundations too.
Here’s the rub, though: micro is just as squishy, in its own way, as macro. The axiomatic architecture has nothing to do with science, elegant as it may appear to those who have devoted years of their life to mastering it. Yes, micro is absolutely internally consistent. So what?
Optimization is a formal technique, something you can do with a sufficiently specified utility mapping and choice set, but it is not descriptive of the actual behavior of individuals or organizations. Equilibrium can be identified in models constructed by economists, but there are few real-world markets in which stationarity is observed for very long. (Disequilibrium dynamics are observed, but adjustment is the ill-behaved child of microeconomics, the one who smashes the furniture and is sent to bed early so that proper equilibrium conditions can hold forth.) General equilibrium theory in particular is a dead-end project, useful only for establishing the myriad ways real world economies are incapable of achieving such timeless bliss. Its welfare properties do not even hold asymptotically: getting closer to an equilibrium state (achieving equimarginal conditions in more markets) does not guarantee Pareto superiority over positions further away. And of course, the convenient construct of a representative agent has no justification whatsoever in microeconomic analysis.
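The point about equimarginal conditions is the classic theory-of-second-best result, and a toy numerical sketch (my construction, not from the post) makes it concrete. Take a Cobb-Douglas consumer facing taxes on two goods, with tax revenue rebated lump-sum. Uniform taxes leave relative prices, and hence the allocation, undistorted; eliminating just one tax restores the equimarginal condition in that market but skews relative prices, and utility falls.

```python
from math import log

def equilibrium_utility(m, tx, ty):
    """Utility at equilibrium for a consumer with U = ln x + ln y.
    Producer prices are 1; consumer prices are (1+tx) and (1+ty).
    Tax revenue R = tx*x + ty*y is rebated lump-sum, so total
    spending m + R is solved in closed form rather than iterated."""
    # Share of spending collected as revenue under Cobb-Douglas demands
    a = tx / (2 * (1 + tx)) + ty / (2 * (1 + ty))
    spending = m / (1 - a)            # m + R from the fixed point R = a*(m+R)
    x = spending / (2 * (1 + tx))     # Cobb-Douglas demand: half of spending
    y = spending / (2 * (1 + ty))     # on each good, at consumer prices
    return log(x) + log(y)

u_uniform = equilibrium_utility(100, 0.5, 0.5)  # both markets "distorted"
u_partial = equilibrium_utility(100, 0.5, 0.0)  # one distortion removed
u_first_best = equilibrium_utility(100, 0.0, 0.0)

print(u_uniform == u_first_best)  # True: uniform tax + rebate is undistorting
print(u_partial < u_uniform)      # True: fixing one market lowers welfare
```

With these numbers the uniform-tax allocation is (50, 50), identical to the no-tax optimum, while removing the second tax shifts it to (40, 60) and reduces utility. None of this depends on the specific parameters; any partial move from uniform to non-uniform taxation in this setup does the same.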
Textbook micro should not be a basis for pre-certifying real micro modeling and empirical work, much less macro. In fact, economics does not offer any specifically “scientific” concept or method around which we should be compelled to standardize, which leaves us with no substitute for thinking through every significant problem from the ground up. Consider this not a loss but a gain. With less a priori baggage, macroeconomics might develop the habit of pruning in the face of disconfirmation, treating Type I error with the seriousness real scientific work demands. Hell, there can even be lots of serviceable micro within macro, but it will be empirically grounded and institutionally specific—finance, corporate investment, markets in real estate and consumer durables, stuff like that, populated by humans and not rational cyborgs.