Saturday, October 11, 2014

Chile and Beer

Sorry about the header, but there's been renewed interest in the pioneering work of Stafford Beer in Allende's Chile, first as a result of Eden Medina's Cybernetic Revolutionaries (2011) and now the article by Evgeny Morozov in the current New Yorker. Beer was a management specialist who applied cybernetic principles to business organization. He was brought to Chile to design a cybernetic planning system for the entire economy, Cybersyn, which died when Allende was overthrown in the 1973 coup. Undoubtedly, this is one of the great might-have-beens of the twentieth century: what could Beer have built if he had been given enough time and resources?

I have more than a passing interest in this topic. I regularly teach a course called Alternatives to Capitalism, and I used the Medina book in the most recent iteration. This fall I am working on a paper (as a co-author) that brings together Beer's "viable system model" of the firm with economic analysis, with a focus on the determinants of worker autonomy. I've been imbibing Beer regularly for some time.

There is a lot to say about the new round of Beer enthusiasm. I don't want to get into a pissing contest (where are these puns coming from?), but I believe Morozov is quite wrong to identify Beer with the Soviet cyberneticians depicted, for example, in Francis Spufford's marvelous Red Plenty. The Soviet reformers wanted to use computers to calculate efficient planning prices; there were no prices in Beer's model. There was no bottom-up reversal of authority in the Soviet vision, not even in theory, whereas that was intended to be the distinctive aspect of Cybersyn, the feature that would make it "really" socialist. While computers played a role in both approaches, it's a mistake to put too much weight on them, big and heavy as they were back then. Beer, after all, had made his reputation with minimal knowledge or use of computers; his cybernetics was organizational and conceptual.

But I don’t think that the actual potential of Cybersyn matched Beer’s vision for it, and its shortcomings even during the limited period in which it was in partial operation bear this out.  Beer’s critics were right, in fact: this really was a project whose result could only be to intensify centralized control over decisions at lower levels—the computer as Big Brother.  Production systems were taken as given at the enterprise level, and the only questions were those asked in operations research: how much should we dial up this process or dial down that one?  People were simply instruments in this framework; they had no ability to change the questions that were being asked.

From an economic point of view, I'm afraid Beer did not rise to the Hayek challenge. Cybersyn processed information about the throughput of materials and products far more efficiently than the Hayek of 1937 could have imagined, but it left unexamined the problem of how information is ultimately generated. An internet of things can tell you what materials are going where, but it can't identify promising innovations in production systems or tell you which innovations should be replicated and which discarded. Worse, it has no way to assess the quality of what's being produced, since it is primarily consumers who need to be able to decide this. Hayek is surely right that what we would now call parallel processing is needed to implement trial-and-error methods in real time, and there need to be incentives for improved production methods and higher quality. Hayek, non-Walrasian that he was, would probably say, and I would agree with him, that Beer's model works well within firms but not between them, since coordination is not the primary problem that economies, as opposed to firms, need to solve. (The deep problem is: coordinate to do what, and in what way?)

I’m compressing a much more detailed argument and should probably stop here.  None of this, incidentally, has to do with the paper I’m writing, since that one is about the theory of the firm.  I should also add that Beer’s ideas are valuable and can be incorporated into a better model of economic planning, just not the way he went about it.  The guy was brilliant but he didn’t know much economics.

UPDATE: Here are two more thoughts about Cybersyn.

A. For Beeristas, it should be disturbing that his Chilean model lacked a System II, in this case meaning there was no provision for horizontal communication between firms.  All information flowed up and down, passing through the center.  In fact, it was all System III—with no apparent Systems IV or V.  System III is the element of command-based hierarchy.

B. Mechanical application of the viable systems model to whole economies is a dubious enterprise.  The clearest evidence for this is the large role that markets play at present.  Markets do not exemplify any of Beer’s systems beyond System I (direct activity of the units); they operate on a different basis.  This doesn’t mean that markets are perfect or that planning is impossible, only that before you start postulating how economies need to be organized you ought to take a close look at how markets do this.  Specifically, as I tried to explain above, markets accomplish several functions that are necessary to a modern economy but are not addressed by Cybersyn.  Does this imply a division of labor?  What division?

To put it in Beerian terms, Cybersyn is not an economic brain.  What it approximates is the autonomic nervous system, in the sense that János Kornai and Béla Martos described it in Autonomous Control of the Economic System.  It’s fine for a paramecium but rather limited for a human.

Hiatt Hysterical Over Losing His Schtick

Poor Fred Hiatt. For years, the editor of the Washington Post's editorial page has made his named appearances on that page (he bloviates the main editorial lead daily, anonymously) only to call for cutting Social Security, and occasionally Medicare as well. That has been his schtick. Now it is over, but he fails to recognize it.

OK, for some time I have been ridiculing him over this obsession of his, which he has imposed on many other regular writers on WaPo's ed page, including R.J. Samuelson, Ruth Marcus, and more recently, Catherine Rampell. I almost wrote on this when he went nuts over it again on Monday, but Dean Baker whonked on him pretty solidly immediately, pointing out how stupid and ridiculous he looked: today's US debt/GDP ratio is 74%, with near-zero interest rates, and the CBO says it will be 78% ten years from now, which Hiatt hysterically declared to be "dangerous." The 104% forecast for 2039 he declared to be "unsustainable," which Dean correctly pointed out was totally ridiculous. So, I did not post anything.
   
Needless to say, the ridicule has mounted, some of it more general, some of it more specific. Paul Krugman has pointed out the problem of "secret deficit lovers," people who have made a living whining about deficit dangers but are unhappy now that the latest reports say the deficit is going down, because their longstanding calls to cut benefits for old people are not likely to be taken seriously in the near future. PK named no names, but Fred Hiatt is near the top of the list, if not absolutely at the top. More personally, John Podesta, whom Hiatt cited in his Monday WaPo piece (perhaps the single most stupid and embarrassing column he has ever written), has dumped all over him on Twitter, with an accompanying column in yesterday's WaPo, as linked to by Mark Thoma.

So let me add my two bits, something none of the above have yet said. First of all, it is amazing that when confronted with good news from the CBO that medical care costs are falling, leading to declining future deficit projections, Hiatt does not applaud; indeed, nowhere in his column does he even note that this is a change in the projections. He notes the new data without noting how it undercuts the hysteria of his past columns. Instead he continues to whine about the Bowles-Simpson commission Obama appointed, which called for cuts in senior entitlements along with tax increases that GOP members of that commission would not accept (see Paul Ryan), a standoff Hiatt has for years treated as unbridgeable. He briefly noted that tax increases were one way out of any senior entitlement problem, but did not remotely recommend them, despite longstanding polls showing public support for exactly that solution should such a problem seriously arise in the future. He simply cannot bring himself to admit that the problem he has been carrying on about so hysterically for so many years is not what he claimed it was. He, and many of his close pals, have simply been wrong wrong wrong.

So here we have poor Hiatt, resolutely ignoring good news. Not a whisper in his column that the CBO is now forecasting continuing deficit reductions in the near future (most of the US public still mistakenly believes deficits are rising, with no help from WaPo in informing them otherwise). The CBO carefully does not project further reductions in medical care costs going forward, and Hiatt does not even remotely raise the possibility of such, much less the idea that maybe the way to avoid a debt/GDP ratio over 100% a quarter of a century from now is to continue Obama's effort to bring US medical care costs down to OECD levels. Dean Baker has long pointed out that if our medical care costs were at OECD levels, we would not have this long-term deficit problem at all. And there are many obvious ways to move in that direction, from reducing the power of pharma patents to loosening immigration rules for physicians, among others.

So, I feel sorry for Fred. Beating up on seniors who have paid in their taxes for what they are getting has been the one and only topic that has inspired him to write columns under his own name for many years. The new projections of lower deficits, good news to most of us, simply do not register with him. Actually, they probably do. But Krugman is right. As much as anybody, he is the longstanding VSP in DC who has been whining for years about cutting Social Security and Medicare, whose excuse for this argument has simply disappeared, but he and his pals simply are not willing to face the new facts.

Barkley Rosser

Friday, October 10, 2014

The Saudi Problem With Daesh (aka ISIS/ISIL/IS)

Crossroads Arabia reports on an al-Monitor article by commentator Bader al-Rashed. He is upset that Daesh (aka ISIS/ISIL/IS) is apparently distributing, in the territory it controls, books by Muhammad ibn Abd al-Wahhab, the founder of the Wahhabist movement that has been the ruling ideology of the Saudi royal family since the 1740s. This would suggest that Daesh is indeed strongly Wahhabist in its fundamental orientation.

Al-Rashed in turn argues that no, they are not. Daesh are really Kharijites, a Muslim group from the early days of Islam that was neither Sunni nor Shi'i, held strict views, and was based mostly in what is now southern Iraq. The Kharijites were famous for their intense takfirism, the practice of excommunicating people they viewed as not being proper Muslims. That would indeed seem to be something Daesh likes to do. This is tied to the notion of apostasy, which is outlawed in 21 Muslim countries and punishable by death. I note that some interpreters of the Qur'an read the relevant passages as allowing amputation or expulsion as alternative punishments, and certainly the latter would be far more humane.

There are no self-declared Kharijites anywhere in the world now, and Daesh does not identify itself as such. The closest surviving group, although more moderate than the old Kharijites, would be the Ibadis, descended from an early offshoot closely related to the Kharijites. They are dominant in Oman today and are neither Sunni nor Shi'i; Oman actually seems more moderate than most nations ruled by either of those.

In any case, it must be recognized that Daesh draws strongly on the fundamental theology of the Saudis. The latter must oppose them because their declaration of a caliphate says they should rule Mecca, Medina, and al-Quds (Jerusalem). The Saudi king's proudest title is Custodian of the Two Holy Mosques, in Mecca and Medina, with a successful Hajj just completed. The Saudis do not claim to be caliphs, but they do not wish to give up their rule of those cities, or that title.

Barkley Rosser

A throw of the D.I.C.E. will never abolish chance

In Building a Green Economy, NYT Magazine, April 7, 2010, Paul Krugman wrote,
As Harvard’s Martin Weitzman has argued in several influential papers, if there is a significant chance of utter catastrophe, that chance — rather than what is most likely to happen — should dominate cost-benefit calculations. And utter catastrophe does look like a realistic possibility, even if it is not the most likely outcome. 
Weitzman argues — and I agree — that this risk of catastrophe, rather than the details of cost-benefit calculations, makes the most powerful case for strong climate policy. Current projections of global warming in the absence of action are just too close to the kinds of numbers associated with doomsday scenarios. It would be irresponsible — it’s tempting to say criminally irresponsible — not to step back from what could all too easily turn out to be the edge of a cliff.
... 
So what I end up with is basically Martin Weitzman’s argument: it’s the nonnegligible probability of utter disaster that should dominate our policy analysis. And that argues for aggressive moves to curb emissions, soon.
So far, so good. But Krugman's conclusion produced this pretzel of cognitive dissonance:
...there has to be a real chance that political support for action on climate change will revive. 
If it does, the economic analysis will be ready [no, it isn't]. We know how to limit greenhouse-gas emissions [no, we don't]. We have a good sense of the costs [nope] — and they’re manageable [how could we know?]. All we need now is the political will.
Krugman apparently assumed that the cost estimates developed in, for example, Nordhaus's "dynamic integrated climate-economy" (DICE) analyses are independent of when the greenhouse-gas abatement actions are taken. But the rationale for delaying abatement is precisely that the discount rate assumed in the model makes it cheaper to wait. Weitzman's critique doesn't present cost estimates. Contra Krugman, there is not even a consensus about what needs to be done or how to do it, let alone "a good sense of the costs" of doing... it? Whatever "it" is.

The bottom line (literally) is that a key feature of the structure and assumptions of the conventional models was facilitating economic growth and relying on that growth to finance the costs of abatement. The DICE were loaded for growth! To put it somewhat crudely, delaying abatement was supposed to make a large part of the cost "pay for itself" through the dividends earned on the money saved by not doing it now. You cannot have your cake and eat it too. Nor can you finance a current expenditure from revenues you would have earned if you hadn't made the expenditure.
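To see how much work the discounting assumption does, here is a minimal sketch with invented numbers (mine, not Nordhaus's): the same abatement bill looks far smaller in present-value terms the longer you wait and the higher the discount rate, which is exactly the "pay for itself" logic at issue.

```python
# Toy present-value comparison of abatement now vs. abatement later.
# All numbers are invented for illustration; nothing here comes from DICE itself.

def present_value(cost, years_from_now, discount_rate):
    """Discount a future cost back to today."""
    return cost / (1 + discount_rate) ** years_from_now

abatement_bill = 1_000  # billions, hypothetical
for rate in (0.015, 0.03, 0.055):
    now = present_value(abatement_bill, 0, rate)
    later = present_value(abatement_bill, 40, rate)
    print(f"discount rate {rate:.1%}: pay now = {now:,.0f}, "
          f"pay in 40 years = {later:,.0f} (in today's terms)")

# At a 5.5% rate the same bill 40 years out "costs" roughly an eighth as much today.
# The saving is real only if the money not spent actually earns that return in the meantime.
```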

The excerpts and abstracts below are not from people Krugman would ridicule as "degrowthers." There seems to be a dawning awareness that the assumptions of the conventional integrated assessment models need to be, at the very least, radically revised, which is essentially the point those silly anti-capitalist degrowthers on the left (going back to that silly anti-capitalist leftist Nicholas Georgescu-Roegen) have been making all along.

"Climate Change Policy: What Do the Models Tell Us?" Robert S. Pindyck
Very little. A plethora of integrated assessment models (IAMs) have been constructed and used to estimate the social cost of carbon (SCC) and evaluate alternative abatement policies. These models have crucial flaws that make them close to useless as tools for policy analysis: certain inputs (e.g., the discount rate) are arbitrary, but have huge effects on the SCC estimates the models produce; the models’ descriptions of the impact of climate change are completely ad hoc, with no theoretical or empirical foundation; and the models can tell us nothing about the most important driver of the SCC, the possibility of a catastrophic climate outcome. IAM-based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading.
"Climate Policy: Science, Economics, and Extremes," Anthony C. Fisher and Phu V. Le
Economic models help illustrate the links between the climate and the economy, and they are an important component of the multidisciplinary analysis that is needed to address climate change. However, there are major problems with the estimates of potential damages in the IAMs... First, damage functions and estimates appear to have little connection to the empirical findings from econometric studies of sectoral impacts, particularly on agriculture, as we discuss later. More generally, economy-wide damage functions are simply not known, especially at the global level. Thus, as Pindyck (2013b) argues, there is little empirical, or for that matter theoretical, foundation for the specification of functional forms and parameters in the models. This suggests that their quantitative results and policy prescriptions are somewhat arbitrary. 
We agree with Stern (2013) that there are gross underestimations of damages in economic impact models and IAMs, and we discuss some additional issues that are not adequately addressed in the models including the importance of nonlinearities, environmental impacts, extreme events, and capital losses.
"Endogenous growth, convexity of damages and climate risk: how Nordhaus' framework supports deep cuts in carbon emissions." Simon Dietz and Nicholas Stern
'To slow or not to slow' (Nordhaus, 1991) was the first economic appraisal of greenhouse gas emissions abatement and founded a large literature on a topic of great, worldwide importance. In this paper we offer our assessment of the original article and trace its legacy, in particular Nordhaus' later series of 'DICE' models. From this work many have drawn the conclusion that an efficient global emissions abatement policy comprises modest and modestly increasing controls. On the contrary, we use DICE itself to provide an initial illustration that, if the analysis is extended to take more strongly into account three essential elements of the climate problem -- the endogeneity of growth, the convexity of damages, and climate risk -- optimal policy comprises strong controls. To focus on these features and facilitate comparison with Nordhaus' work, all of the analysis is conducted with a high pure-time discount rate, notwithstanding its problematic ethical foundations. [We have argued elsewhere that careful scrutiny of the ethical issues around pure-time discounting points to lower values than are commonly assumed (usually with little serious discussion).]
"Climate Risks and Carbon Prices: Revising the Social Cost of Carbon," Frank Ackerman and Elizabeth A. Stanton
Once the social cost of carbon is high enough to justify maximum feasible abatement in cost-benefit terms, then cost-benefit analysis becomes functionally equivalent to a precautionary approach to carbon emissions. All that remains for economic analysis of climate policy is to determine the cost-minimizing strategy for eliminating emissions as quickly as possible. This occurs because the marginal damages from emissions have become so large; the uncertainties explored in our analysis, regarding damages and climate sensitivity, imply that the marginal damage curve could turn nearly vertical at some point, representing a catastrophic or discontinuous change.  
The factors driving this result are uncertainties, not known facts. We cannot know in advance how large climate damages, or climate sensitivity, will turn out to be. The argument is analogous to the case for buying insurance: it is the prudent choice, not because we are sure that catastrophe will occur, but because we cannot be sufficiently sure that it will not occur. By the time we know what climate sensitivity and high-temperature damages turn out to be, it will be much too late to do anything about it. The analysis here demonstrates that plausible values for key uncertainties imply catastrophically large values of the social cost of carbon.

Our results offer a new way to make sense of the puzzling finding by Martin Weitzman: his “dismal theorem” establishes that under certain assumptions, the marginal benefit of emission reduction could literally be infinite (Weitzman 2009). The social cost of carbon, which measures the marginal benefit of emission reduction, is not an observable price in any actual market. Rather, it is a shadow price, deduced from an analysis of climate dynamics and economic impacts. Its only meaning is as a guide to welfare calculations; we can obtain a more accurate understanding of the welfare consequences of policy choices by incorporating that shadow price for emissions.

Limits to Growth One More Time

Not much time this morning, but I’d like to respond to the dustup between Mark Buchanan and Paul Krugman over whether energy (and other resource) use can be decoupled from GDP growth.

1. I get the impression that Buchanan identifies GDP with “stuff”, at least subconsciously.  But GDP is value, what people are willing to pay for.  When I teach an econ class, that’s a component of GDP.  If an extra student shows up, that’s GDP growth.  Of course, a lot of GDP really is stuff, but as economies develop they tend to become less stuffy.  Overall not enough, but how unstuffy they could become is an empirical, not a theoretical matter.

2. But I agree with Buchanan that increases in energy efficiency alone are unlikely to accomplish what we need to contain climate change.  Absent changes on other fronts, the growth of demand for energy services will simply swamp the effect of greater efficiency.  This has been true in the past and any realistic projection puts it in our future as well.

3. And Buchanan is also right that there is no historical precedent for the kind of decoupling between economic growth and fossil fuel use (let's be specific here) that we would need to meet both economic and climate goals.  The notion of forgoing most of our remaining supplies of extremely energy-dense minerals flies in the face of all of human history.  That's why it will be a big challenge to bring it off.  The challenge begins with putting in place a policy that prohibits most fossil fuel development.  You can discuss the particulars, but there is no getting around the need for a binding constraint.  The reason is exactly the one that Buchanan pinpoints: increases in efficiency and even in renewable energy sources will not be sufficient by themselves to offset the energy demands stemming from global GDP growth.

Can we compel most fossil fuels to stay in the ground and still have economic growth?  Since this is about growth in value and not necessarily stuff, the answer still seems to be yes.  But we won’t have a sufficient shift away from stuffiness without measures that prohibit dangerous levels of fossil fuel extraction.  An unprecedented change in the trajectory of economic growth requires unprecedented policies.

Postscript: I’m not interested in whether “unlimited” economic growth at some distant future date is incompatible with resource constraints.  It’s also true that economic growth can’t continue after the universe collapses in on itself.  We’ll let distant future people, if they still exist, worry about this.

Be More Like Us

In this morning’s New York Times I read:
Wolfgang Schäuble, the German finance minister, speaking in Washington on Thursday, insisted that “writing checks” was no way for the eurozone to increase growth, according to Reuters. Mr. Schäuble urged France and Italy to do more to overhaul their economies instead.
Yes, the first priority is for the rest of the Eurozone to be more like Germany.  They can begin by requiring firms to belong to industry associations that tax their members to finance apprenticeship and other programs, putting worker representatives on firms’ supervisory bodies and transferring the majority of banking assets to noncommercial (public and cooperative) institutions.  That’s Schäuble’s plan, right?

Thursday, October 9, 2014

Slow steam, fat tails and a dismal theorem

Paul Krugman's column the other day, invoking William Nordhaus's "demolition" of the forecasting model in The Limits to Growth, got me wondering about how well Nordhaus's indictment has stood up over the years. So I started poking around in the archives.

Twenty years after publication of Limits to Growth, the research team reconvened in 1992 for Beyond the Limits, an  update of the earlier analysis. Nordhaus, too, followed up his earlier "blistering" review with a critique of the second version. This second review was more conciliatory, albeit still critical of the Limits to Growth and Beyond the Limits assumptions and conclusions:
While the LTG school argued that economic decline was inevitable and economists argued that the LTG argument was fallacious, the argument is ultimately an empirical matter. Put differently, critics would have gone too far had they claimed that the postulated pessimistic scenario could not hold.
Instead of simply "demolishing" the LTG model, Nordhaus responded in his second review with his own simple model, using a more conventional generalized Cobb-Douglas production function.
Like LTG models, the general model given in the last section shows the tendency toward economic decline. In addition, there are no less than four conditions, each of which is satisfied in the LTG model, that will lead to ultimate economic stagnation, decline, or collapse...  
[However]...the entire argument can be reversed with a simple change in the specification of the model; more precisely, I will introduce technological change into the production structure and assume that the Cobb-Douglas production function accurately represents the technological possibilities for substitution.
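In stylized form, the reversal Nordhaus describes can be seen in a toy resource-constrained Cobb-Douglas economy. This is a sketch with invented parameters, not Nordhaus's own calibration: without technological change, the shrinking resource base drags output down; with a modest rate of technical progress, the same structure grows.

```python
# Toy Cobb-Douglas economy with a depleting resource, with and without technical change.
# Parameters are invented for illustration, not taken from Nordhaus's model.

def output_path(tech_growth, years=200, alpha=0.3, beta=0.1,
                depletion=0.02, K=1.0, L=1.0, R=1.0, A=1.0):
    path = []
    for t in range(years):
        Y = A * (K ** alpha) * (R ** beta) * (L ** (1 - alpha - beta))
        path.append(Y)
        A *= 1 + tech_growth     # technological progress
        R *= 1 - depletion       # resource base shrinks every year
    return path

static = output_path(tech_growth=0.0)
progress = output_path(tech_growth=0.01)
print("no technical change:  year 0 = %.2f, year 199 = %.2f" % (static[0], static[-1]))
print("1%% annual progress:   year 0 = %.2f, year 199 = %.2f" % (progress[0], progress[-1]))

# With beta = 0.1 and 2% depletion, the resource drag lowers output about 0.2% a year;
# 1% annual technical change more than offsets it, so the same structure yields growth.
```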
"Ultimately, then," Nordhaus concluded his discussion of simple growth models,
...the debate about future of economic growth is an empirical one, and resolving the debate will require analysts to examine fundamental structural parameters of the economy... How large are the drags from natural resources and land? What is the quantitative relationship between technological change and the resource-land drag? How does human population growth behave as incomes rise? How much substitution is possible between labor and capital on the one hand, and scarce natural resources, land, and pollution abatement on the other? These are empirical questions that cannot be settled solely by theorizing.
One of the discussants for Nordhaus's 1992 Brookings paper was Martin Weitzman, who described it as "an outstanding paper" that "represents the economic state of the art, circa 1992, in dealing seriously and honestly with the major limits-to-growth arguments." One could almost imagine hearing the scalpel being quietly honed as Weitzman administered that subtle anesthesia.

Fast forward another two decades and it is Nordhaus's turn to comment on a paper by Weitzman, "On modeling and interpreting the economics of catastrophic climate change."
In an important paper, Weitzman (2009) has proposed what he calls a dismal theorem. He summarizes the theorem as follows: "[T]he catastrophe-insurance aspect of such a fat-tailed unlimited-exposure situation, which can never be fully learned away, can dominate the social-discounting aspect, the pure-risk aspect, and the consumption-smoothing aspect." The general idea is that under limited conditions concerning the structure of uncertainty and societal preferences, the expected loss from certain risks such as climate change is infinite and that standard economic analysis cannot be applied.
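A quick numerical illustration of what a fat tail does to that kind of calculation (my toy example, not Weitzman's model): with a thin-tailed damage distribution the sample mean settles down as you draw more scenarios; with a sufficiently fat-tailed one it never does, because ever-larger catastrophes keep dominating the average.

```python
# Toy illustration of Weitzman-style fat tails: the sample mean of damages fails
# to settle down when the tail is fat enough (a Pareto distribution with alpha <= 1
# has an infinite mean). Purely illustrative; not Weitzman's actual model.
import random

random.seed(0)

def mean_damage(n, alpha):
    """Average of n Pareto(alpha)-distributed damage draws."""
    return sum(random.paretovariate(alpha) for _ in range(n)) / n

for n in (1_000, 100_000, 1_000_000):
    thin = mean_damage(n, alpha=3.0)   # thin-ish tail: finite mean (= 1.5)
    fat = mean_damage(n, alpha=0.9)    # fat tail: the mean does not exist
    print(f"n={n:>9,}  thin-tail average ~ {thin:6.2f}   fat-tail average ~ {fat:,.0f}")

# The thin-tailed average hovers near 1.5 no matter how many draws you take;
# the fat-tailed average keeps jumping around and tends to grow with the sample size.
```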
Nordhaus concluded his discussion of Weitzman's theorem on a somber and humble note:
In many cases, the data speak softly or not at all about the likelihood of extreme events. This means that reasonable people may have quite different views about the likelihood of extreme events, such as the catastrophic outcomes of climate change, and that there are no data to adjudicate such disputes. This humbling thought applies more broadly, however, as there are indeed deep uncertainties about virtually every issue that humanity faces, and the only way these uncertainties can be resolved is through continued careful consideration and analysis of all data and theories.
The word "growth" doesn't appear in Nordhaus's 16-page commentary. Pascal's wager, anyone?



Tuesday, October 7, 2014

Krugman, straw man, beggar, thief

In his "Slow Steaming..." column today, Paul Krugman blatantly misrepresented and trivialized Mark Buchanan's argument:
Buchanan says that it’s not possible to have something bigger — which is apparently what he thinks economic growth has to mean — without using more energy.
Wrong. Buchanan said bigger things, as a rule, use more energy. He also said that efficiencies of scale don't overturn that rule. He referred to data about economic growth and energy use, not "what he thinks economic growth has to mean":
Data from more than 200 nations from 1980 to 2003 fit a consistent pattern: On average, energy use increases about 70 percent every time economic output doubles. This is consistent with other things we know from biology. Bigger organisms as a rule use energy more efficiently than small ones do, yet they use more energy overall. The same goes for cities. Efficiencies of scale are never powerful enough to make bigger things use less energy.
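For what it's worth, Buchanan's 70 percent figure translates into a scaling exponent of roughly 0.77: energy use grows more slowly than output, but it still grows, which is his whole point. A back-of-the-envelope check (my arithmetic, not his):

```python
# If energy use rises 70% every time output doubles, energy scales roughly as
# E ~ Y**b with b = log2(1.7). Back-of-the-envelope only.
from math import log

b = log(1.7) / log(2)
print(f"implied scaling exponent: {b:.2f}")      # about 0.77

for doublings in (1, 2, 3):
    output = 2 ** doublings
    energy = output ** b                          # equals 1.7 ** doublings
    print(f"output x{output}: energy use roughly x{energy:.1f}")
# Efficiency improves (energy per unit of output falls), but total energy still climbs.
```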
Krugman disparaged Buchanan for reprising the claims and title of the book that William Nordhaus "demolished so effectively forty years ago":
...not only does he make the usual blithe claims about what economists never think about; even his title is almost exactly the same as the classic (in the sense of classically foolish) Jay Forrester book that my old mentor, Bill Nordhaus, demolished so effectively forty years ago.
I'll leave it to posterity whether or not Nordhaus "demolished" Limits to Growth. Forty years on, Brian Hayes wrote a less "blistering" critique, published in American Scientist, judging Forrester's mathematical model to be "more of a polemical tool than a scientific instrument" but concluding that the book's message of limits is worth listening to. Defenders of the book have argued that Nordhaus misunderstood or misrepresented the structure of the model, as well as whether the model used historical data (it did) -- the title of Nordhaus's review was "World dynamics: measurement without data."

But following Krugman's link to his earlier article reveals that the "foolishness" or otherwise of the book Nordhaus supposedly demolished is really beside the point. Krugman's conclusion in that earlier article directly contradicts the Pollyanna claims he is now making:
You might say that this is my answer to those who cheerfully assert that human ingenuity and technological progress will solve all our problems. For the last 35 years, progress on energy technologies has consistently fallen below expectations. 
...
But anyway, while the Limits to Growth stuff of the 1970s was a mess, the history of energy technology doesn’t support extreme optimism, either.
You've really got to follow the links.

Keep asking the wrong question, Paul Krugman, and you won't have to think about the answer.

Krugman: "But it is, I think, a useful corrective to the rigorous-sounding but actually silly notion that you can’t produce more without using more energy."

Of course you can produce more while using less energy. That is what has been happening throughout history. That is the wrong question, though. The problem has to do with maintaining employment while reducing the absolute amount of energy used. Producing more with less energy doesn't solve that problem.

UPDATE: Krugman proved Mark Buchanan right: economists ARE blind to the limits of growth!

Monday, October 6, 2014

Problem: Kidneys Don’t Clear

The quote of the day comes from Nicholas Colas of the broker-dealer firm ConvergEx Group on the use of Google autofill to uncover economic trends: “....can the U.S. economy be doing all that well if (I want to sell a) ‘Kidney’ is a common autofill?”

The real problem, as any certified economist will tell you, is that, while “kidney” trends well on the sell side, it doesn’t show up if you type “I want to buy”.  A shame, all those useful kidneys just sitting there, like a glut in the gut.

Sunday, October 5, 2014

The Collapse of the Russian Ruble

There has been little attention to this in the western media, but the Russian ruble has suffered a major decline in the last few months. Earlier this year it was running around 25-30 to the US dollar. Now the official rate is at 40 and falling, with the rate at exchange windows in Moscow reaching 50 to 1. There is major capital and human flight going on, and it is attracting increasing comment in Russian sources.

Two pro-Putin economists have subtly noted problems. In English in Russia Behind the Headlines this past week, Alexander Shokhin, head of the Russian Union of Industrialists and Entrepreneurs, argues that the fall of the ruble will have some positive effects, particularly in the agro-industrial sector. However, he also warns that, along with the sanctions, it will be damaging in the high-technology sector, most notably in nanotechnology, which was just getting off the ground and has now come to a complete halt.

In Russian, and perhaps more importantly, Herman (German) Gref, chairman and CEO of Sberbank, the largest bank in Russia, still partly state-owned and the main holdover from the Soviet era as the descendant of the old state savings bank system of the USSR, has just made a widely distributed speech in which he calls for buckling down in the face of the loss of luxury imports, but also warns that the opportunities for import substitution are greatly exaggerated and will take a long time to realize in any case. He also notes the outflow of both skilled people and capital, and reports that the leaders of the Russian central bank are making desperate efforts to prop up the ruble.

It is unclear to what extent Putin is following all of this, or whether he cares. To some extent he has used the cutoff of fancy food imports to stimulate nationalism by invoking the World War II model, which apparently many like. But part of the reason the truce in eastern Ukraine is still more or less holding, despite an ongoing battle for control of the Donetsk airport (currently held by the Ukrainians), may be that he is in fact aware of the economic impacts on Russia and really does not want things to get worse.

Barkley Rosser

Salmon Roasted

I confess to being shocked by the exchange between Martin Wolf and Felix Salmon in this week's New York Times Book Review. First Salmon had criticized Wolf for using the terms "net external liability position" and "real unit labor costs" instead of "national debt" and "wages". Wolf pointed out that the terms in each pair mean different things (a country's net external liability position is what all its residents owe the rest of the world minus what the rest of the world owes them, not the government's debt; real unit labor costs are labor compensation per unit of output, adjusted for prices, not the level of wages) and concluded, "[if] Salmon is unaware of such distinctions...., one must ask whether he was qualified to review the book."

Salmon's reply actually confirms that he doesn't understand. And this is really basic stuff that anyone writing for a national audience ought to know. How can Salmon explain this?

When was the last time we’ve seen such an embarrassing, high-visibility economics fail?

Exit, Voice and American Education

Once upon a time we had Paul Goodman, A. S. Neill and widespread disenchantment with public education.  It was bureaucratic, conformist, top-down and smothered creativity.  If you were countercultural it was hopelessly straight; if you were from a minority community it was neocolonial.  We hated it.

But what to do about it? There were two fundamental options. One was to open it up, to make it more democratic with local control and lots of input and accountability. This was the way of voice. The other was to give parents the option of taking their children out of schools they didn't like and shopping around for ones they did. That was exit.

Both were tried. We had showdowns over community control and much more intense politics around school board elections. And we also had a revival of home schooling and an expansion of private schools, which now served not only elite and religious families but also those fleeing integration and changing neighborhood patterns.

But then exit won. Educational policy was largely reframed around the concept of parental choice. Students were viewed primarily as consumers of education services, and schools were to be subjected to market discipline. The benefits of education were measured in individual student outcomes, from test scores to post-graduation earnings. Schools and the teachers they employ were to compete to produce the best outcomes, with the winners rewarded and the losers driven out of business. The market model reached down into the early grades and extended upward to colleges and universities, which were now required to finance themselves through tuition: fee for service.

Maybe there is a good account of this process that explains why it turned out this way.  If there is I haven’t seen it.  But an adequate explanation has to take account of the larger picture, the supremacy of neoliberalism in most spheres of social and economic policy and the international dimensions of this shift.

This meditation is prompted by a story this morning about the highly successful schools labeled as “failing” under No Child Left Behind.  If this were simply about methods of classification I’m sure a solution could be found.  That, however, would miss the point of NCLB, which was never simply about assigning labels.

As originally proposed, schools deemed to be “failing” were to be replaced by charters, or parents would have the option of withdrawing from public schools altogether and taking their per-student money with them, or both.  In other words, it was a backdoor effort to privatize public education.  But this was too brazen, so it was scaled back.  Instead, “failing” schools would be penalized with less funding, with the expectation that they would fail that much more, precipitating a downward spiral that would result, sooner or later, in a less- or non-public replacement.  Of course, with this objective, the rules for what constitutes “failure” were written to practically guarantee that, sooner or later, every public school would qualify.  (Somehow charters and private schools were exempt from this defunding ploy.  Surprise.)

The problem was that, as the law was written, the rate of public school "failure" was much greater than the rate at which privatized alternatives could be created. It was chaos. So under Obama NCLB has been scaled back. Public schools can buy more time. They can institute pay and personnel policies tied to test scores to simulate market outcomes rather than subjecting schools to actual, flesh-and-blood exit. States are rewarded for gradually turning their public schools into charters. It is much more civilized, but the frame remains a service-provision model in which funding exits when service quality is assessed as insufficient. Since you can't withdraw funding from every school expected to be judged delinquent, you need a sector not subject to the rules: the privatized, market-driven sector. NCLB is still predicated, in the end, on a systematic shift from the first to the second.

The irony in all this is that countries that have the best education systems, like Finland, rely principally on voice, not exit.  Teachers play a central role in creating curriculum and schools are deeply embedded in their communities.  The emphasis is on intrinsic motivation rather than carrot-and-stick market incentives.

Which brings us back to the main question: why, in the face of all the rather obvious arguments to the contrary, did the marketizers win in the US?  Why are they on the offensive worldwide, and why does public-anything—public schools, public broadcasting, public enterprise—fight a rearguard battle?

Friday, October 3, 2014

John Rawls and William S. Burroughs -- separated at birth?


Slowly, it dawned on The Economist that there was a catch...

From a special report, "Technology isn't Working," in The Economist:
The integration of large emerging markets into the global economy added a large pool of relatively low-skilled labour which many workers in rich countries had to compete with. That meant firms were able to keep workers’ pay low. And low pay has had a surprising knock-on effect: when labour is cheap and plentiful, there seems little point in investing in labour-saving (and productivity-enhancing) technologies. By creating a labour glut, new technologies have trapped rich economies in a cycle of self-limiting productivity growth. 
Fear of the job-destroying effects of technology is as old as industrialisation. It is often branded as the lump-of-labour fallacy: the belief that there is only so much work to go round (the lump), so that if machines (or foreigners) do more of it, less is left for others. This is deemed a fallacy because as technology displaces workers from a particular occupation it enriches others, who spend their gains on goods and services that create new employment for the workers whose jobs have been automated away. A critical cog in the re-employment machine, though, is pay. To clear a glutted market, prices must fall, and that applies to labour as much as to wheat or cars.
Meanwhile, at Counterpunch, Alan Nasser shows how to add two and two to get four, instead of "huh?":
There is an alternative, and the only one that is capable of addressing a situation in which profits and economic growth can no longer be achieved by investing in real production and hiring workers. An overripe, industrially saturated economy can be made into one that can deliver on capitalism’s false promises. All workers can be employed, but for far fewer hours, and a just living wage can be provided to all. This is the arrangement recommended by Marx and Keynes.