Tuesday, February 2, 2016

Dueling Economic Models and How to Score Them


An article in today’s New York Times compares two wildly different assessments of the proposed Trans-Pacific Partnership trade and investment deal, one by the Peterson Institute, a Washington think tank financed by business interests, and the other by the Global Development and Environment Institute (GDAE) of Tufts University.  The Peterson people tell us their model predicts income gains from TPP; GDAE’s model predicts losses.  The article is strictly he said, she said.

How should economists present their modeling work to the public?  And how should journalists report it?  The current dispute falls well short of best practice.  Here’s how I think it should go:

First, modelers should list all the key assumptions embodied in their models.  In order to generate predictions, any model has to hold certain parameters constant; the technical term is “closing the model”.  (It’s because nothing is held constant in real life that prediction is so dicey.)  The results depend on which parameters are fixed in advance and how they’re fixed.  Reasonable people can disagree about how to do this, but there’s no way to discuss it unless the assumptions are presented openly.

Here’s an example.  Peterson uses the GTAP model (Global Trade Analysis Project) of Purdue, which I briefly discuss in my micro text in the chapter on general equilibrium theory.  This model assumes full employment and holds trade balances fixed, so that a trade deal shock is not permitted to change any country’s current account or unemployment rate.  It is erroneous, then, for the Times report to state that the Peterson economists “concluded” that “there would be no net change in overall employment in the United States.”  That’s not a conclusion—that’s an assumption.  There’s a big difference.  (Dean Baker critiques this assumption over on his soapbox.)
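
To see how much the closure choice drives the answer, here’s a toy sketch with invented numbers, nothing like the actual GTAP or GDAE systems: a one-good economy with a linear demand curve, hit by the same demand shock under two different closures.

```python
# Toy illustration of model closure, with invented numbers; this is not GTAP or
# GDAE, just a one-good "economy" with a linear demand curve.
A, B = 100.0, 2.0        # demand: quantity = A - B * price
Q_FULL = 80.0            # full-employment output, held fixed under closure 1
P_FIXED = 10.0           # price level, held fixed under closure 2
SHOCK = 5.0              # an exogenous boost to demand, standing in for a trade deal

def full_employment_closure(shock):
    """Output is pinned at Q_FULL; only the price is allowed to move."""
    price = (A + shock - Q_FULL) / B
    return Q_FULL, price

def fixed_price_closure(shock):
    """The price is pinned at P_FIXED; only output is allowed to move."""
    quantity = A + shock - B * P_FIXED
    return quantity, P_FIXED

for name, closure in [("full employment", full_employment_closure),
                      ("fixed price", fixed_price_closure)]:
    q0, p0 = closure(0.0)      # baseline, no shock
    q1, p1 = closure(SHOCK)    # same shock under each closure
    print(f"{name:>15} closure: output change {q1 - q0:+.1f}, "
          f"price change {p1 - p0:+.2f}")
```

Under the full employment closure the model can never report an output or employment effect, no matter what shock you feed it; that’s the sense in which “no net change in overall employment” is an input, not a finding.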

The GTAP model also assumes the optimality of market equilibration, so that any impediment to trade is necessarily welfare-reducing; the only question is how much, which is precisely what the model is designed to estimate.  Meanwhile, GDAE does not make this assumption; it is concerned instead with how a trade deal such as TPP will alter trade balances, which are not assumed to be fixed.
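
For what it’s worth, the textbook version of that welfare arithmetic is a Harberger triangle: roughly half the tariff times the imports it chokes off.  Here is a minimal sketch with invented numbers (the partial-equilibrium caricature, not the actual GTAP welfare module):

```python
# Partial-equilibrium caricature of "impediments to trade are welfare-reducing".
# Linear import demand; the slope and the tariffs are invented numbers.

def deadweight_loss(tariff, slope):
    """Harberger triangle: 0.5 * tariff * imports squeezed out by the tariff."""
    lost_imports = slope * tariff
    return 0.5 * tariff * lost_imports

SLOPE = 4.0                                   # import response per dollar of tariff
for tariff in (0.5, 1.0, 2.0):
    print(f"tariff {tariff:.1f}: estimated welfare loss "
          f"{deadweight_loss(tariff, SLOPE):.2f}")
```

Notice that the formula can only ever report a non-negative loss from the tariff, so removing it can only register as a gain; the model sizes the gain, it never questions it.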

The way it should work is that each team, in presenting its results, would list all of its key assumptions.  Journalists would translate these lists into terms their readers can understand.  Then all of us could have an intelligent discussion about which set of assumptions is more appropriate to the questions we care about.

Second, economic models like GTAP and GDAE’s Global Policy Model are typically employed over and over.  They have track records.  Journalists should be able to review these models’ prior predictions and tell readers how well they have fared.  For instance, GTAP has been around for decades.  How well did it do in predicting the outcomes of past trade agreements or exchange rate adjustments?  Did it tell us anything useful in advance about China’s accession to the WTO?  And how has GDAE’s model performed?
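
The scoring itself is not hard; the hard part is digging up the old forecasts.  With made-up numbers standing in for past predictions and outcomes (these are not actual GTAP or GDAE figures), the bookkeeping is just:

```python
# Hypothetical track-record check: predicted vs. realized effects of past deals,
# in percent of GDP.  All figures below are placeholders, not real forecasts.
predicted = {"Agreement A": 0.5, "Agreement B": 1.2, "Agreement C": -0.3}
realized  = {"Agreement A": 0.1, "Agreement B": 0.4, "Agreement C": -0.8}

errors = [abs(predicted[k] - realized[k]) for k in predicted]
print(f"mean absolute error: {sum(errors) / len(errors):.2f} "
      f"percentage points of GDP")
```

A journalist armed with even that crude number for each model would have something more useful to report than dueling point estimates.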

The he said, she said approach is now recognized as unacceptable in reporting on climate change and other topics where the weight of evidence is crucial.  Economics shouldn’t be an exception.

2 comments:

Thornton Hall said...

How actual models work (and improve):
http://www.nytimes.com/2016/02/02/science/where-el-nino-weather-begins-pacific-ocean-noaa.html

Jack said...

Here's an algorithmic model that economists would do well to take into account when drawing conclusions from their own models: (Assumption) x (Assumption) = the square root of Absurdity.  Sorry, but my laptop doesn't have a means for typing mathematical symbols, but you get the point.  A conclusion is only as good as the assumptions it is drawn from.  And the more assumptions used as the basis for any conclusion, the further from objectivity those conclusions drift.