Friday, October 10, 2014

A throw of the D.I.C.E. will never abolish chance

In "Building a Green Economy" (NYT Magazine, April 7, 2010), Paul Krugman wrote:
As Harvard’s Martin Weitzman has argued in several influential papers, if there is a significant chance of utter catastrophe, that chance — rather than what is most likely to happen — should dominate cost-benefit calculations. And utter catastrophe does look like a realistic possibility, even if it is not the most likely outcome. 
Weitzman argues — and I agree — that this risk of catastrophe, rather than the details of cost-benefit calculations, makes the most powerful case for strong climate policy. Current projections of global warming in the absence of action are just too close to the kinds of numbers associated with doomsday scenarios. It would be irresponsible — it’s tempting to say criminally irresponsible — not to step back from what could all too easily turn out to be the edge of a cliff.
... 
So what I end up with is basically Martin Weitzman’s argument: it’s the nonnegligible probability of utter disaster that should dominate our policy analysis. And that argues for aggressive moves to curb emissions, soon.
So far, so good. But Krugman's conclusion produced this pretzel of cognitive dissonance:
...there has to be a real chance that political support for action on climate change will revive. 
If it does, the economic analysis will be ready [no, it isn't]. We know how to limit greenhouse-gas emissions [no, we don't]. We have a good sense of the costs [nope] — and they’re manageable [how could we know?]. All we need now is the political will.
Krugman apparently assumed that the cost estimates developed, for example, in Nordhaus's "dynamic integrated climate-economy" (DICE) analyses are independent of when the greenhouse-gas abatement actions are taken. They are not: the rationale for delaying abatement is precisely that the discount rate assumed in the model makes waiting look cheaper. Weitzman's critique doesn't present cost estimates. Contra Krugman, there is not even a consensus about what needs to be done or how to do it, let alone "a good sense of the costs" of doing... it? Whatever "it" is.

The bottom line (literally) is that a key consideration in the structure and assumptions of the conventional models was facilitating economic growth and relying on that growth to finance the costs of abatement. The DICE were loaded for growth! To put it somewhat crudely, delaying abatement was supposed to make a large part of the cost "pay for itself" through the returns earned on the money saved by not abating now. You cannot have your cake and eat it too. Nor can you finance a current expenditure out of revenues you would have earned only if you hadn't made the expenditure.
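The discounting arithmetic that loads the DICE is easy to exhibit. A minimal sketch (the cost and the 4% rate are illustrative round numbers, not Nordhaus's calibration):

```python
# Present-value arithmetic behind "delay is cheaper": a fixed abatement
# bill looks smaller the longer it is deferred, because the model assumes
# the money not spent today compounds as productive investment.
# Illustrative numbers only -- not Nordhaus's actual parameters.

cost = 1.0    # abatement bill, arbitrary units
r = 0.04      # assumed discount rate (illustrative)

for delay in (0, 20, 40):
    pv = cost / (1 + r) ** delay
    print(f"abate in {delay:2d} years -> present value {pv:.2f}")

# abate in  0 years -> present value 1.00
# abate in 20 years -> present value 0.46
# abate in 40 years -> present value 0.21
```

The "saving" from delay exists only if growth actually delivers the assumed return, which is precisely what unabated climate damage calls into question.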

The excerpts and abstracts below are not from people Krugman would ridicule as "degrowthers." There seems to be a dawning awareness that the assumptions of the conventional integrated assessment models need to be, at the very least, radically revised, which is essentially the point those silly anti-capitalist degrowthers on the left (going back to that silly anti-capitalist leftist Nicholas Georgescu-Roegen) have been making all along.

"Climate Change Policy: What Do the Models Tell Us?" Robert S. Pindyck
Very little. A plethora of integrated assessment models (IAMs) have been constructed and used to estimate the social cost of carbon (SCC) and evaluate alternative abatement policies. These models have crucial flaws that make them close to useless as tools for policy analysis: certain inputs (e.g., the discount rate) are arbitrary, but have huge effects on the SCC estimates the models produce; the models’ descriptions of the impact of climate change are completely ad hoc, with no theoretical or empirical foundation; and the models can tell us nothing about the most important driver of the SCC, the possibility of a catastrophic climate outcome. IAM-based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading.
"Climate Policy: Science, Economics, and Extremes," Anthony C. Fisher and Phu V. Le
Economic models help illustrate the links between the climate and the economy, and they are an important component of the multidisciplinary analysis that is needed to address climate change. However, there are major problems with the estimates of potential damages in the IAMs... First, damage functions and estimates appear to have little connection to the empirical findings from econometric studies of sectoral impacts, particularly on agriculture, as we discuss later. More generally, economy-wide damage functions are simply not known, especially at the global level. Thus, as Pindyck (2013b) argues, there is little empirical, or for that matter theoretical, foundation for the specification of functional forms and parameters in the models. This suggests that their quantitative results and policy prescriptions are somewhat arbitrary. 
We agree with Stern (2013) that there are gross underestimations of damages in economic impact models and IAMs, and we discuss some additional issues that are not adequately addressed in the models including the importance of nonlinearities, environmental impacts, extreme events, and capital losses.
"Endogenous growth, convexity of damages and climate risk: how Nordhaus' framework supports deep cuts in carbon emissions." Simon Dietz and Nicholas Stern
'To slow or not to slow' (Nordhaus, 1991) was the first economic appraisal of greenhouse gas emissions abatement and founded a large literature on a topic of great, worldwide importance. In this paper we offer our assessment of the original article and trace its legacy, in particular Nordhaus' later series of 'DICE' models. From this work many have drawn the conclusion that an efficient global emissions abatement policy comprises modest and modestly increasing controls. On the contrary, we use DICE itself to provide an initial illustration that, if the analysis is extended to take more strongly into account three essential elements of the climate problem -- the endogeneity of growth, the convexity of damages, and climate risk -- optimal policy comprises strong controls. To focus on these features and facilitate comparison with Nordhaus' work, all of the analysis is conducted with a high pure-time discount rate, notwithstanding its problematic ethical foundations. [We have argued elsewhere that careful scrutiny of the ethical issues around pure-time discounting points to lower values than are commonly assumed (usually with little serious discussion).]
"Climate Risks and Carbon Prices: Revising the Social Cost of Carbon," Frank Ackerman and Elizabeth A. Stanton
Once the social cost of carbon is high enough to justify maximum feasible abatement in cost-benefit terms, then cost-benefit analysis becomes functionally equivalent to a precautionary approach to carbon emissions. All that remains for economic analysis of climate policy is to determine the cost-minimizing strategy for eliminating emissions as quickly as possible. This occurs because the marginal damages from emissions have become so large; the uncertainties explored in our analysis, regarding damages and climate sensitivity, imply that the marginal damage curve could turn nearly vertical at some point, representing a catastrophic or discontinuous change.  
The factors driving this result are uncertainties, not known facts. We cannot know in advance how large climate damages, or climate sensitivity, will turn out to be. The argument is analogous to the case for buying insurance: it is the prudent choice, not because we are sure that catastrophe will occur, but because we cannot be sufficiently sure that it will not occur. By the time we know what climate sensitivity and high-temperature damages turn out to be, it will be much too late to do anything about it. The analysis here demonstrates that plausible values for key uncertainties imply catastrophically large values of the social cost of carbon.

Our results offer a new way to make sense of the puzzling finding by Martin Weitzman: his “dismal theorem” establishes that under certain assumptions, the marginal benefit of emission reduction could literally be infinite (Weitzman 2009). The social cost of carbon, which measures the marginal benefit of emission reduction, is not an observable price in any actual market. Rather, it is a shadow price, deduced from an analysis of climate dynamics and economic impacts. Its only meaning is as a guide to welfare calculations; we can obtain a more accurate understanding of the welfare consequences of policy choices by incorporating that shadow price for emissions.
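The mechanics behind Weitzman's result can be glimpsed in a toy Monte Carlo experiment. In the sketch below, the distributions and the quadratic damage function are invented for illustration (nothing here is from Weitzman's paper): a thin-tailed climate sensitivity gives an expected-damage estimate that settles down as the sample grows, while a fat-tailed one keeps climbing, the finite-sample shadow of an expectation that does not exist.

```python
import numpy as np

# Toy comparison of thin- vs fat-tailed climate sensitivity S.
# Damages assumed convex in S: D(S) = S**2 (illustrative only).
rng = np.random.default_rng(0)

def mean_damage(draws):
    return np.mean(draws ** 2)

for n in (10_000, 1_000_000):
    thin = rng.lognormal(1.0, 0.4, n)        # all moments finite
    fat = 3.0 * (rng.pareto(1.5, n) + 1.0)   # Pareto tail index 1.5
    print(f"n={n:>9,}  thin-tail E[D] ~ {mean_damage(thin):6.1f}   "
          f"fat-tail E[D] ~ {mean_damage(fat):10.1f}")

# The thin-tailed estimate converges; the fat-tailed one grows with n,
# because squaring a Pareto variable with tail index 1.5 yields an
# infinite expectation -- an unbounded social cost of carbon.
```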

Limits to Growth One More Time

Not much time this morning, but I’d like to respond to the dustup between Mark Buchanan and Paul Krugman over whether energy (and other resource) use can be decoupled from GDP growth.

1. I get the impression that Buchanan identifies GDP with “stuff”, at least subconsciously.  But GDP is value, what people are willing to pay for.  When I teach an econ class, that’s a component of GDP.  If an extra student shows up, that’s GDP growth.  Of course, a lot of GDP really is stuff, but as economies develop they tend to become less stuffy.  Not nearly enough so far, but how unstuffy they could become is an empirical, not a theoretical, matter.

2. But I agree with Buchanan that increases in energy efficiency alone are unlikely to accomplish what we need to contain climate change.  Absent changes on other fronts, the growth of demand for energy services will simply swamp the effect of greater efficiency.  This has been true in the past and any realistic projection puts it in our future as well.

3. And Buchanan is also right that there is no historical precedent for the kind of decoupling between economic growth and fossil fuel use (let’s be specific here) that we would need to meet both economic and climate goals.  The notion of foregoing most of our remaining supplies of extremely energy-dense minerals flies in the face of all of human history.  That’s why it will be a big challenge to bring it off.  The challenge begins with putting in place a policy that prohibits most fossil fuel development.  You can discuss the particulars, but there is no getting around the need for a binding constraint.  The reason is exactly the one that Buchanan pinpoints: increases in efficiency, and even increases in renewable energy sources, will not be sufficient by themselves to offset the energy demands stemming from global GDP growth.
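A back-of-the-envelope calculation shows why (the growth and efficiency rates below are illustrative round numbers, not projections):

```python
# Why efficiency gains alone lose the race: if output grows faster than
# energy intensity falls, absolute energy use still rises.
# Rates are illustrative round numbers, not projections.

gdp_growth = 0.03          # 3% per year global GDP growth
efficiency_gain = 0.015    # 1.5% per year fall in energy per unit of GDP

energy_growth = (1 + gdp_growth) * (1 - efficiency_gain) - 1
years = 40
factor = (1 + energy_growth) ** years
print(f"energy use grows ~{energy_growth:.1%}/yr -> x{factor:.2f} over {years} years")
# energy use grows ~1.5%/yr -> x1.78 over 40 years
```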

Can we compel most fossil fuels to stay in the ground and still have economic growth?  Since this is about growth in value and not necessarily stuff, the answer still seems to be yes.  But we won’t have a sufficient shift away from stuffiness without measures that prohibit dangerous levels of fossil fuel extraction.  An unprecedented change in the trajectory of economic growth requires unprecedented policies.

Postscript: I’m not interested in whether “unlimited” economic growth at some distant future date is incompatible with resource constraints.  It’s also true that economic growth can’t continue after the universe collapses in on itself.  We’ll let distant future people, if they still exist, worry about this.

Be More Like Us

In this morning’s New York Times I read:
Wolfgang Schäuble, the German finance minister, speaking in Washington on Thursday, insisted that “writing checks” was no way for the eurozone to increase growth, according to Reuters. Mr. Schäuble urged France and Italy to do more to overhaul their economies instead.
Yes, the first priority is for the rest of the Eurozone to be more like Germany.  They can begin by requiring firms to belong to industry associations that tax their members to finance apprenticeship and other programs, putting worker representatives on firms’ supervisory bodies and transferring the majority of banking assets to noncommercial (public and cooperative) institutions.  That’s Schäuble’s plan, right?

Thursday, October 9, 2014

Slow steam, fat tails and a dismal theorem

Paul Krugman's column the other day, invoking William Nordhaus's "demolition" of the forecasting model in The Limits to Growth, got me wondering about how well Nordhaus's indictment has stood up over the years. So I started poking around in the archives.

Twenty years after publication of Limits to Growth, the research team reconvened in 1992 for Beyond the Limits, an update of the earlier analysis. Nordhaus, too, followed up his earlier "blistering" review with a critique of the second version. This second review was more conciliatory, albeit still critical of the Limits to Growth and Beyond the Limits assumptions and conclusions:
While the LTG school argued that economic decline was inevitable and economists argued that the LTG argument was fallacious, the argument is ultimately an empirical matter. Put differently, critics would have gone too far had they claimed that the postulated pessimistic scenario could not hold.
Instead of simply "demolishing" the LTG model, in his second review Nordhaus responded with his own simple model, using a more conventional generalized Cobb-Douglas production function.
Like LTG models, the general model given in the last section shows the tendency toward economic decline. In addition, there are no less than four conditions, each of which is satisfied in the LTG model, that will lead to ultimate economic stagnation, decline, or collapse...  
[However]...the entire argument can be reversed with a simple change in the specification of the model; more precisely, I will introduce technological change into the production structure and assume that the Cobb-Douglas production function accurately represents the technological possibilities for substitution.
"Ultimately, then," Nordhaus concluded his discussion of simple growth models,
...the debate about future of economic growth is an empirical one, and resolving the debate will require analysts to examine fundamental structural parameters of the economy... How large are the drags from natural resources and land? What is the quantitative relationship between technological change and the resource-land drag? How does human population growth behave as incomes rise? How much substitution is possible between labor and capital on the one hand, and scarce natural resources, land, and pollution abatement on the other? These are empirical questions that cannot be settled solely by theorizing.
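Nordhaus's reversal is easy to reproduce in a toy version of his setup (the parameter values below are invented for illustration, not taken from the 1992 paper):

```python
# Toy Cobb-Douglas growth with a depleting resource:
#   Y = A * K**alpha * R**beta * L**(1 - alpha - beta)
# Parameters are illustrative, not Nordhaus's calibration.

alpha, beta = 0.3, 0.1     # capital and resource shares
g_A = 0.01                 # TFP growth; set to 0.0 for the LTG-style decline
d_R = 0.02                 # resource input shrinks 2% per year
s, delta = 0.2, 0.05       # saving rate, depreciation

A, K, R, L = 1.0, 3.0, 1.0, 1.0
for t in range(101):
    Y = A * K**alpha * R**beta * L**(1 - alpha - beta)
    if t % 25 == 0:
        print(f"t={t:3d}  Y={Y:.2f}")
    K = K + s * Y - delta * K
    A *= 1 + g_A
    R *= 1 - d_R

# With g_A = 0, output drifts down as the resource depletes; with g_A = 0.01
# it grows -- the whole verdict hangs on one empirical parameter.
```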
One of the discussants for Nordhaus's 1992 Brookings paper was Martin Weitzman, who described it as "an outstanding paper" that "represents the economic state of the art, circa 1992, in dealing seriously and honestly with the major limits-to-growth arguments." One could almost imagine hearing the scalpel being quietly honed as Weitzman administered that subtle anesthesia.

Fast forward another two decades and it is Nordhaus's turn to comment on a paper by Weitzman, "On modeling and interpreting the economics of catastrophic climate change."
In an important paper, Weitzman (2009) has proposed what he calls a dismal theorem. He summarizes the theorem as follows: "[T]he catastrophe-insurance aspect of such a fat-tailed unlimited-exposure situation, which can never be fully learned away, can dominate the social-discounting aspect, the pure-risk aspect, and the consumption-smoothing aspect." The general idea is that under limited conditions concerning the structure of uncertainty and societal preferences, the expected loss from certain risks such as climate change is infinite and that standard economic analysis cannot be applied.
Nordhaus concluded his discussion of Weitzman's theorem on a somber and humble note:
In many cases, the data speak softly or not at all about the likelihood of extreme events. This means that reasonable people may have quite different views about the likelihood of extreme events, such as the catastrophic outcomes of climate change, and that there are no data to adjudicate such disputes. This humbling thought applies more broadly, however, as there are indeed deep uncertainties about virtually every issue that humanity faces, and the only way these uncertainties can be resolved is through continued careful consideration and analysis of all data and theories.
The word "growth" doesn't appear in Nordhaus's 16-page commentary. Pascal's wager, anyone?



Tuesday, October 7, 2014

Krugman, straw man, beggar, thief

In his "Slow Steaming..." column today, Paul Krugman blatantly misrepresented and trivialized Mark Buchanan's argument:
Buchanan says that it’s not possible to have something bigger — which is apparently what he thinks economic growth has to mean — without using more energy.
Wrong. Buchanan said bigger things, as a rule, use more energy. He also said that efficiencies of scale don't overturn that rule. He referred to data about economic growth and energy use, not "what he thinks economic growth has to mean":
Data from more than 200 nations from 1980 to 2003 fit a consistent pattern: On average, energy use increases about 70 percent every time economic output doubles. This is consistent with other things we know from biology. Bigger organisms as a rule use energy more efficiently than small ones do, yet they use more energy overall. The same goes for cities. Efficiencies of scale are never powerful enough to make bigger things use less energy.
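The scaling Buchanan cites pins down the implied elasticity directly. A quick check, using only the numbers in his quote:

```python
from math import log

# Buchanan: energy use rises ~70% each time output doubles.
# That implies E ~ Y**b with 2**b = 1.7.
b = log(1.7) / log(2)
print(f"implied elasticity b = {b:.2f}")   # b = 0.77

# Energy use grows at roughly 0.77 times the GDP growth rate: sublinear
# (efficiency improves with scale) but nowhere near zero (absolute
# decoupling) -- which is exactly Buchanan's point.
```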
Krugman disparaged Buchanan for reprising the claims and title of the book that William Nordhaus "demolished so effectively forty years ago":
...not only does he make the usual blithe claims about what economists never think about; even his title is almost exactly the same as the classic (in the sense of classically foolish) Jay Forrester book that my old mentor, Bill Nordhaus, demolished so effectively forty years ago.
I'll leave it to posterity whether or not Nordhaus "demolished" Limits to Growth. Forty years on, Brian Hayes wrote a less "blistering" critique, published in American Scientist, judging Forrester's mathematical model to be "more of a polemical tool than a scientific instrument" but concluding that the book's message of limits is worth listening to. Defenders of the book have argued that Nordhaus misunderstood or misrepresented the structure of the model, as well as whether it used historical data (it did) -- the title of Nordhaus's review, after all, was "World dynamics: measurement without data."

But following Krugman's link to his earlier article reveals that the "foolishness" or otherwise of the book Nordhaus supposedly demolished is really beside the point. Krugman's conclusion in that earlier article directly contradicts the Pollyanna claims he is now making:
You might say that this is my answer to those who cheerfully assert that human ingenuity and technological progress will solve all our problems. For the last 35 years, progress on energy technologies has consistently fallen below expectations. 
...
But anyway, while the Limits to Growth stuff of the 1970s was a mess, the history of energy technology doesn’t support extreme optimism, either.
You've really got to follow the links.

Keep asking the wrong question, Paul Krugman, and you won't have to think about the answer.

Krugman: "But it is, I think, a useful corrective to the rigorous-sounding but actually silly notion that you can’t produce more without using more energy."

Of course you can produce more while using less energy per unit of output. That is what has been happening throughout history. That is the wrong question, though. The problem has to do with maintaining employment while reducing the absolute amount of energy used. Producing more with relatively less energy doesn't solve that problem.

UPDATE: Krugman proved Mark Buchanan right: economists ARE blind to the limits of growth!

Monday, October 6, 2014

Problem: Kidneys Don’t Clear

The quote of the day comes from Nicholas Colas of the broker-dealer firm ConvergEx Group on the use of Google autofill to uncover economic trends: “....can the U.S. economy be doing all that well if (I want to sell a) ‘Kidney’ is a common autofill?”

The real problem, as any certified economist will tell you, is that, while “kidney” trends well on the sell side, it doesn’t show up if you type “I want to buy”.  A shame, all those useful kidneys just sitting there, like a glut in the gut.

Sunday, October 5, 2014

The Collapse of the Russian Ruble

There has been little attention to this in the western media, but the Russian ruble has suffered a major decline in the last few months.  Earlier this year it was running around 25-30 to the US dollar.  Now the official rate is at 40 and falling, with the rate at 50 to 1 at windows in Moscow.  There is major capital and human flight going on that is attracting increasing comment in Russian sources.

Two pro-Putin economists have subtly noted problems.  In English in Russia Behind the Headlines this past week, Alexander Shokhin, President of the Russian Union of Industrialists and Entrepreneurs, argued that the fall of the ruble will have some positive effects, particularly in the agro-industrial sector.  However, he also warned that, along with the sanctions, it will be damaging in the high-technology sector, most notably in nanotechnology, which was just getting off the ground and has now come to a complete halt.

In Russian, and perhaps more importantly, Herman (German) Gref, Chairman and President of Sberbank (the largest bank in Russia and its main holdover from the Soviet era, still partly state-owned and the descendant of the old postal savings bank of the USSR), has just made a widely distributed speech in which he calls for buckling down in the face of the loss of luxury imports.  But he also warns that the opportunities for import substitution are greatly exaggerated and will in any case take a long time to bear fruit.  He notes as well the outflow of both skilled people and capital, and reports that the leaders of the Russian central bank are making desperate efforts to prop up the ruble.

It is unclear to what extent Putin is following all of this or cares.  To some extent he has used the cutoff of fancy food imports to stimulate nationalism by invoking the WWII model, which apparently many like.  But it may also be that part of the reason the truce in eastern Ukraine is still more or less holding, despite an ongoing battle for control of the Donetsk airport (currently held by the Ukrainians), is that he is in fact aware of the economic impacts on Russia and really does not want things to get worse.

Barkley Rosser

Salmon Roasted

I confess to being shocked by the exchange between Martin Wolf and Felix Salmon in this week’s New York Times Book Review.  First, Salmon criticized Wolf for using the terms “net external liability position” and “real unit labor costs” instead of “national debt” and “wages”.  Wolf pointed out that within each pair the two terms mean different things and concluded, “[if] Salmon is unaware of such distinctions...., one must ask whether he was qualified to review the book.”

Salmon’s reply actually confirms that he doesn’t understand.  And this is really basic stuff that anyone writing about economics for a national audience ought to know.  How can Salmon explain this?
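For the record, the distinctions are textbook.  Real unit labor cost is the labor share of income, not the wage; a toy calculation with invented numbers shows the two can move in opposite directions:

```python
# RULC = (W * L) / (P * Y): compensation relative to the nominal value
# of output. Numbers below are invented for illustration.
W0, L0, P0, Y0 = 20.0, 100, 1.00, 4000   # year 0: wage, employment, prices, real output
W1, L1, P1, Y1 = 22.0, 100, 1.02, 4800   # year 1: wages up 10%, output up 20%

rulc0 = (W0 * L0) / (P0 * Y0)
rulc1 = (W1 * L1) / (P1 * Y1)
print(f"wage: {W0} -> {W1} (up {W1 / W0 - 1:.0%})")
print(f"real unit labor cost: {rulc0:.3f} -> {rulc1:.3f} (down)")
```

Likewise, a country’s net external liability position nets all external assets against all external liabilities, public and private, while the national debt is the gross obligation of the government alone.  Neither of Salmon’s proposed substitutions preserves the meaning.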

When was the last time we saw such an embarrassing, high-visibility economics fail?

Exit, Voice and American Education

Once upon a time we had Paul Goodman, A. S. Neill and widespread disenchantment with public education.  It was bureaucratic, conformist, top-down and smothered creativity.  If you were countercultural it was hopelessly straight; if you were from a minority community it was neocolonial.  We hated it.

But what to do about it?  There were two fundamental options.  One was to demand to open it up, make it more democratic with local control and lots of input and accountability.  This was the way of voice.  The other was to give parents the option of taking their children out of schools they didn’t like and shopping around for one they did.  That was exit.

Both were tried.  We had showdowns over community control and much more intense politics around school board elections.  And we also had a revival of home schooling and the expansion of private schools, which now not only served the elite and religious families but also those fleeing from integration and changing neighborhood patterns.

But then exit won.  Educational policy was largely reframed around the concept of parental choice.  Students were viewed primarily as consumers of education services, and schools were to be subjected to market discipline.  The benefits of education were measured in individual student outcomes, from test scores to post-graduation earnings.  Schools and the teachers they employ were to compete to produce the best outcomes, with the winners rewarded and the losers driven out of business.  The market model reached down into the early grades and extended upward to colleges and universities, which were now required to finance themselves through tuition -- fee for service.

Maybe there is a good account of this process that explains why it turned out this way.  If there is I haven’t seen it.  But an adequate explanation has to take account of the larger picture, the supremacy of neoliberalism in most spheres of social and economic policy and the international dimensions of this shift.

This meditation is prompted by a story this morning about the highly successful schools labeled as “failing” under No Child Left Behind.  If this were simply about methods of classification I’m sure a solution could be found.  That, however, would miss the point of NCLB, which was never simply about assigning labels.

As originally proposed, schools deemed to be “failing” were to be replaced by charters, or parents would have the option of withdrawing from public schools altogether and taking their per-student money with them, or both.  In other words, it was a backdoor effort to privatize public education.  But this was too brazen, so it was scaled back.  Instead, “failing” schools would be penalized with less funding, with the expectation that they would fail that much more, precipitating a downward spiral that would result, sooner or later, in a less- or non-public replacement.  Of course, with this objective, the rules for what constitutes “failure” were written to practically guarantee that, sooner or later, every public school would qualify.  (Somehow charters and private schools were exempt from this defunding ploy.  Surprise.)

The problem was that, as the law was written, the rate of public school “failure” was much greater than the rate at which privatized alternatives could be created.  It was chaos.  So under Obama NCLB has been scaled back.  Public schools can buy more time.  They can institute pay and personnel policies tied to test scores, simulating market outcomes rather than subjecting schools to actual, flesh-and-blood exit.  States are rewarded for gradually turning their public schools into charters.  It is much more civilized, but the frame remains a service-provision model in which funding exits when service quality is judged insufficient.  Since you can’t actually withdraw funding from as many schools as are expected to be judged delinquent, you need a sector not subject to the rules -- the privatized, market-driven sector.  NCLB is still predicated, in the end, on a systematic shift from the first to the second.

The irony in all this is that countries that have the best education systems, like Finland, rely principally on voice, not exit.  Teachers play a central role in creating curriculum and schools are deeply embedded in their communities.  The emphasis is on intrinsic motivation rather than carrot-and-stick market incentives.

Which brings us back to the main question: why, in the face of all the rather obvious arguments to the contrary, did the marketizers win in the US?  Why are they on the offensive worldwide, and why does public-anything—public schools, public broadcasting, public enterprise—fight a rearguard battle?

Friday, October 3, 2014

John Rawls and William S. Burroughs -- separated at birth?


Slowly, it dawned on The Economist that there was a catch...

From a special report, "Technology isn't Working," in The Economist:
The integration of large emerging markets into the global economy added a large pool of relatively low-skilled labour which many workers in rich countries had to compete with. That meant firms were able to keep workers’ pay low. And low pay has had a surprising knock-on effect: when labour is cheap and plentiful, there seems little point in investing in labour-saving (and productivity-enhancing) technologies. By creating a labour glut, new technologies have trapped rich economies in a cycle of self-limiting productivity growth. 
Fear of the job-destroying effects of technology is as old as industrialisation. It is often branded as the lump-of-labour fallacy: the belief that there is only so much work to go round (the lump), so that if machines (or foreigners) do more of it, less is left for others. This is deemed a fallacy because as technology displaces workers from a particular occupation it enriches others, who spend their gains on goods and services that create new employment for the workers whose jobs have been automated away. A critical cog in the re-employment machine, though, is pay. To clear a glutted market, prices must fall, and that applies to labour as much as to wheat or cars.
Meanwhile, at Counterpunch, Alan Nasser shows how to add two and two to get four, instead of "huh?":
There is an alternative, and the only one that is capable of addressing a situation in which profits and economic growth can no longer be achieved by investing in real production and hiring workers. An overripe, industrially saturated economy can be made into one that can deliver on capitalism’s false promises. All workers can be employed, but for far fewer hours, and a just living wage can be provided to all. This is the arrangement recommended by Marx and Keynes. 

The Housing Crisis in San Francisco

And not only San Francisco, but that’s the city in the news this morning.  There’s a ballot initiative to impose a 24% tax on developers who eject their tenants and flip properties within five years after acquiring them.  Behind this lies an explosive rise in rental prices, making the city increasingly unaffordable for all but the top few percent.

First, let’s clear away the bs and apply a bit of elementary economics.  On one side you have a flack for the local realtors association saying, according to the journalist’s paraphrase, “the steeper tax would simply be passed along in the sale price.”  On another you have supporters of the mayor touting a program to build an extra 30,000 units over a six-year period.  Both would do well to learn about the price elasticity of demand, which is the key piece of information for such questions.  How much of a tax increase on sellers gets passed along depends on how inelastic demand is; if demand is relatively elastic, sellers have to swallow most of it themselves, which means house prices would fall as development options became less lucrative.  If you can’t flip you won’t pay as much in the first place.  Meanwhile, the effect of any increase in supply on prices depends on the same elasticity.  (I’m assuming for convenience that the rental-to-price ratio in housing remains constant.)  And what percentage increase in the housing stock does 30,000 units represent?
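For the tax side, a minimal sketch of the textbook incidence formula (linear supply and demand; the elasticity numbers are invented for illustration):

```python
# Textbook incidence: the share of a per-unit tax passed through to buyers
# is eps_s / (eps_s + eps_d), with eps_s and eps_d the (absolute) price
# elasticities of supply and demand. Values below are invented.

def buyer_share(eps_s, eps_d):
    return eps_s / (eps_s + eps_d)

eps_s = 0.8   # sluggish housing supply (illustrative)
for eps_d in (0.3, 1.0, 3.0):
    share = buyer_share(eps_s, eps_d)
    print(f"demand elasticity {eps_d:.1f} -> buyers bear {share:.0%} of the tax")

# demand elasticity 0.3 -> buyers bear 73% of the tax
# demand elasticity 1.0 -> buyers bear 44% of the tax
# demand elasticity 3.0 -> buyers bear 21% of the tax
```

The more elastic demand is, the more of the flip tax sellers have to eat, and the less of it shows up in sale prices.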

Do I know what this elasticity looks like in San Francisco?  No I don’t, but if I were a journalist for the most influential newspaper in the country I’d take the time to find out.  Message number one: journalists reporting on economic issues need to learn some economics.

(The one thing I’m pretty sure of, by the way, is that there is no single price elasticity of demand for housing in SF or anywhere else.  Properties are unique, not only physically but in terms of their location, and each has its own elasticity.  But averages would be OK in this situation, since we’re interested in average affordability.)

To its credit, the article discusses the issue of neighborhood stability, which is critical to any investigation of housing policy.  Social capital in its many senses is an externality in the housing market and, at least in principle, justifies intervention.  Whether a particular intervention does more good than harm, of course, is something that a priori theorizing is unable to answer.

If we pull back, the rental price explosion in the urban core is a function of three things: geography, growth, and inequality.  Geography: rent gradients are unavoidable, and geographic features like being situated on a peninsula, isthmus or island exacerbate them.  Growth attracts in-migration and is one of the prices cities pay for success.  The third, inequality, is the hidden beast behind housing unaffordability, and it was only obliquely touched on in the article.  As a society we pay a steep price for the increasing disconnect between the fortunes of the top stratum and everyone else.  One is that their demand for scarce goods drives up the cost for the rest of us.  Housing is perhaps the most important example.

So what to do?  I’d like to see some numbers on flippage in San Francisco.  My prior would be that, compared to the three factors above, it’s a minor aspect.  On balance it might slow the dissolution of social capital a bit.  You could go after geography by filling in the bay and turning it into high-density housing; this would have a rather larger impact, but I wouldn’t recommend it.  More investments in mass transit (BARTrification) would be a better way to change the geography of housing prices and would be beneficial in other respects.  That ought to be at the top of the list.

Even so, extreme inequality remains corrosive of neighborhood stability and just about every other communal value.  Even Hong Kong, where about half the housing is publicly owned and deeply subsidized, has been plunged into a housing crisis due to geography, growth, but especially the shift to development aimed at the super-rich.  Cities can’t solve this problem on their own.

Thursday, October 2, 2014

France, Germany and the Politics of “Reform”

A showdown appears to be shaping up between France and Germany.  France’s new budget openly defies the “Stability and Growth” (sic) limits on fiscal deficits and debt as a percent of GDP, and German officials have been making public statements that the limits need to be adhered to.  On the face of it, if Germany wants, France could be found in breach by Brussels and assessed significant fines.  But this is about politics, not money: the threat is to the two-country alliance at the heart of the entire European project.

France is not the only violator, of course, only the largest and most essential.  If Greece, Spain and other severely distressed economies transgress they can be written off as exceptions; if France ignores the rules the rules are over.  Clearly a common currency cannot survive without macroeconomic coordination, so rules are needed, but Germany has thus far resisted any move to rewrite them.

Since the pivot to austerity, a compromise has governed EU politics: distressed economies are given more time to run deficits, but they are supposed to implement “reforms” that make their economies more competitive and permit them to grow their way out of the red.  As politics, it works, sort of.  It buys time and gives the creditor countries, mainly Germany, a sacrificial offering.  As economics it is pure illusion.

I am not referring to the fact that these reforms are more talked about than adopted.  Nor do I think that reforms of various sorts wouldn’t be beneficial.  France, for instance, is greatly overcentralized; too many decisions are made in Paris, which is stultifying both politically and economically.  My point is that the reforms deficit countries are supposed to adopt have almost nothing to do with why they run deficits or how they can escape them.

There are lots of ways to show this, but let’s take the simplest.  Germany and France are both in economic stall, although Germany is in better shape, with minimal public deficits and much lower unemployment.  So the solution is for France to reform its economy to be more like Germany, right?

Well, consider one such reform, opening up the professional labor market to competition.  There’s a story on this in today’s New York Times.  The French government has passed a law allowing pharmaceuticals to be sold in stores other than pharmacies.  Of course, the pharmacists have gone on strike, and there’s a photo of a group of them demonstrating against the plan.  This is exactly the sort of liberalization that the Germans are demanding, and the article cites Merkel as telling the French they need to make even “bolder changes” in how their economy operates.

But try to buy aspirin in Germany.

The German economy is one of the least liberal, most rule-bound on the planet.  You need all sorts of approval to start a business.  If a company wants to fire a worker it has to deal with the (mandatory) works council.  Entry into professions is tightly controlled, and professional associations wield immense power.  The majority of bank assets are held in public or cooperative banks, and credit is allocated on an openly political basis.  Actually, I like a lot of this and think the social dimension of the German economy should be developed, not shrunk.  But for now it is enough to see that the “failure” of Germany to “reform” has not prevented it from running big external surpluses, and that the economic map of Europe simply does not correspond to its various shades of liberalism.

The politics of “reform” is driven by symbolism and superstition, not economics.  Since the politics are asymmetric, you don’t have to look hard to see the hypocrisy.

All I need to know about Iraq

Cullen Murphy's 2007 book 'The New Rome: The Fall of an Empire and the Fate of America' includes some challenging observations about the behaviour of Americans in the Iraqi Green Zone after the US-led invasion in 2003.

"Bureaucrats and civilian experts representing scores of government agencies, oblivious to culture or history,  were brought in to create an embryonic version of American government for the Iraqis to adopt as their own.  Americans were enlisted to help draft a new constitution.  They drew up scores of new American-inspired laws to address even the least urgent matters, such as patents and copyrights and other kinds of intellectual property.  They created shadow ministries of agriculture, education, electricity, human rights, oil, trade, youth and sports, and more.  Monday night seminars were held to teach prominent Iraqis the basics of a free-market economy..."

One department, Murphy writes, sought to impose a 15% flat tax - "this in a nation where taxes had neither been levied nor paid."...

Cullen Murphy gives other examples of what is portrayed as a disrespectful subjugation of the Iraqi people.  He warns about the phenomenon of 'blowback' that was likely to arise years hence from the American attempt to "create a preconceived template on an evolving reality".

I wonder whether the Iraqi people will want to maintain the pretense of participation in this western-imposed experiment.  Or will America's empire fracture into autonomous regions and fall like the ancient Roman one - as "an imaginative experiment that got a little out of hand". [*]

*  Walter Goffart quote.