Wednesday, November 5, 2014

Public Works, Economic Stabilization and Cost-Benefit Sophistry

I. Public Works and Economic Stabilization


This is where it all began. The National Resources Board's 1934 Report on National Planning and Public Works contained a radically different vision of the methods and purposes of cost-benefit analysis from what has subsequently become the convention. This has profound conceptual (and possibly legal?) consequences for the supposed "economic optimization" of action to limit climate change.

John Maurice Clark was the NRB's economic consultant on the issue of "the use of public works as an economic stabilizing device." His findings provided the substantial basis for the report's Section II, Part 3 "Public Works and 'Economic Stabilization.'" A comprehensive report by Clark, Economics of Planning Public Works, was published in 1935 by the National Planning Board of the Federal Emergency Administration of Public Works.

In Chapter Nine of his 1935 book, Clark introduced the (Kahn-) Keynesian multiplier into American economic discourse. This theoretical analysis provided a rationale for including the extended "secondary effects" of work relief in the calculation of project benefits. As the NRB report had noted: 
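The multiplier logic Clark introduced can be sketched numerically. An initial public-works outlay is respent in successive rounds, each round shrunk by the marginal propensity to consume (MPC); the series converges to 1/(1 − MPC) times the original outlay. The MPC and outlay figures below are illustrative assumptions, not Clark's own numbers:

```python
# Illustrative sketch of the (Kahn-)Keynesian multiplier.
# Each respending round, recipients spend a fraction (the marginal
# propensity to consume, MPC) of the income they receive.

def total_income_generated(initial_outlay, mpc, rounds=1000):
    """Sum the geometric series of respending rounds."""
    return sum(initial_outlay * mpc**n for n in range(rounds))

mpc = 0.75                    # illustrative assumption
outlay = 1_000_000            # a hypothetical $1 million of public works

total = total_income_generated(outlay, mpc)
multiplier = total / outlay   # converges to 1 / (1 - mpc) = 4

print(round(multiplier, 4))   # → 4.0
```

With an MPC of 0.75, every dollar of public-works spending generates four dollars of income in total, which is why the "secondary effects" of work relief loomed so large in the benefit calculation.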
A second series of questions involves the relations of public works to economic stabilization and the emergency problem of work relief. What part can public works play in meeting the problem of business cycles and how far can these works be made an instrument for recovery?
This "second series of questions" was given a very high priority indeed by the Roosevelt administration in the context of the Great Depression. But for Clark the employment of labor that would otherwise have been idle was more than simply a secondary benefit of public works. It was the redress of a cost-shifting "externality" that resulted from the treatment of labor by employers as a variable cost that could be dispensed with during times of business slack. 

In his Studies in the Economics of Overhead Costs, Clark (1923) had argued that labor should be considered as an overhead cost of doing business rather than as a variable cost of the employing firm because the cost of maintaining the worker and his or her family "in good stead" has to be borne by someone whether or not that worker is employed:
If all industry were integrated and owned by workers, what would be the relation of constant to variable expense? ...it would be clear to worker-owners that the real cost of labor could not be materially reduced by unemployment.
One might argue that in a democracy, public works can be regarded as "integrated and owned by workers" and thus capable of restoring payment for the real cost of labor, if not by private employers then by the government -- which could then recover the outlay through taxation. Nor should it be assumed that Clark's attitude of reparation was not shared by the National Resources Board. The opening paragraphs of the report's foreword proclaimed in populist prose:
The natural resources of America are the heritage of the whole Nation and should be conserved and utilized for the benefit of all of our people. Our national democracy is built upon the principle that the gains of our civilization are essentially mass gains and should be administered for the benefit of the many rather than the few; our priceless resources of soil, water, minerals are for the service of the American people, for the promotion of the welfare and well-being of all citizens. The present study of our natural resources is carried through in this spirit and with a desire to make this principle a living fact in America. 
Unfortunately this principle has not always been followed even when declared; on the contrary, there has been tragic waste and loss of resources and human labor, and widespread spoliation and misuse of the natural wealth of the many by the few. [emphasis added]
The conservation movement begun a quarter of a century ago marked the beginning of an organized national effort to protect and develop these assets; and this national policy was aided in many instances by the individual States. To some extent the shameful waste of timber, oil, soil, and minerals has been halted, although with terrible exceptions where ignorance, inattention, or greed has devastated our heritage almost beyond belief.
So this, then, is the founding rationale for cost-benefit analysis, as different from today's market-appeasing conventions as chalk from cheese. And -- oh, yes -- it is THE LAW:
"...if the benefits to whomsoever they may accrue are in excess of the estimated costs, and if the lives and social security of people are otherwise adversely affected." --  Title 33 U.S. Code § 701a - Declaration of policy of the Flood Control Act of 1936
Refugees from the "1000-year flood" of the Mississippi River in 1937.

II. The Fallacy of Maximizing Net Returns

In the early 1950s, the mandate for giving prominent consideration to secondary benefits was effectively expurgated from federal government cost-benefit guidelines. The purge was carried out through two documents: the "Green Book," Proposed Practices for Economic Analysis of River Basin Projects, published in May 1950, and Budget Circular A-47, issued on December 31, 1952. Maynard Hufschmidt's (2000) chronicle of "Benefit-Cost Analysis 1933-1985" provides a useful overview of the sequence of events. Hufschmidt worked for the National Resources Planning Board, the Bureau of the Budget, and the Department of the Interior between 1941 and 1954.

Hufschmidt recounted that the Green Book's "treatment of the thorny issue of secondary benefits was at odds with the practice of the Bureau of Reclamation" and it recommended that benefits "should be measured from the strict national economic efficiency point of view" rather than from the perspective of local or regional benefits. This controversial recommendation was not implemented by the concerned agencies.

John Maurice Clark was called upon again, along with two other economists, to adjudicate the issues in dispute between water resources agencies and the interagency subcommittee on benefits and costs. According to Hufschmidt, the panel of economists "recommended a cautious approach to including secondary benefits," which included separate reporting of primary and secondary benefits but did not rule out their use. [I have requested the Report of Panel of Consultants on Secondary or Indirect Benefits of Water-Use Projects through interlibrary loan and hope to elaborate on its analysis when I have had a chance to study it.]

The economic consultants' report was submitted at the end of June, 1952. Six months later it became a moot point as the federal Bureau of the Budget issued Budget Circular A-47, severely restricting the use of secondary benefits. Hufschmidt described A-47 as a "conservative document" that was regarded as imposing "severe restraint" on water projects:
The subject matter coverage was much the same as the Green Book; basically, it was a conservative document, which placed primary emphasis on economic efficiency-oriented primary benefits for project justification. The use of secondary benefits was severely restricted, an opportunity-cost concept of interest or discount rate, tied to the interest rate of long-term government bonds, was adopted, and a 50-year time horizon was established.  
Budget Circular A-47 was widely regarded by the water resources agencies and by the many proponents of water resources projects in Congress as a severe restraint on water projects. It served this purpose during the eight years of a relatively conservative Republican administration under President Eisenhower from 1952 to 1960, and was finally rescinded in 1962 in the early days of President Kennedy’s administration.
It doesn't need to be assumed that skepticism or caution regarding the evaluation of secondary benefits was unwarranted. Richard Hammond (1966) observed that the practices of the Bureau of Reclamation "brought benefit-cost analysis into disrepute in many quarters, particularly when agencies continued, in times of wartime boom and post-war 'full employment,' practices generated by the depression." On the other hand, the remedy pursued by the Green Book had its own problems, characterized by Hammond as "The Fallacy of Maximizing Net Returns" which the Green Book pursued as an "incontrovertible proposition":
The most effective use of economic resources is made if they are utilized in such a way that the amount by which benefits exceed costs is at a maximum rather than in such a way as to produce a maximum benefit-cost ratio or on some other basis... This criterion of maximising net benefits is a fundamental requirement for economic justification of a project. [emphasis added by Hammond]
"What seems to have happened," Hammond observed of the foregoing paragraph, "is that a familiar abstract proposition of economic theory, that rational conduct consists in balancing marginal cost against marginal gain, has been mistaken for a prescriptive rule of behavior applicable in any and all circumstances without qualification." Furthermore, he eventually explained, the maximizing mania ultimately boils down to substituting guesswork about one set of "opportunity cost" intangibles for other intangibles called "secondary benefits" and diminishing some of the speculative figures to an infinitesimal amount by the application of an arbitrary discount rate.

In defence of Clark's earlier formulations regarding secondary benefits, it should be said that he was almost exasperating in his insistent qualification of cost and benefit estimates as judgemental and tentative. This contrasts with the maximalist language of pseudo-scientific precision exemplified by the Green Book's use of "words like measure, ascertain, and evaluate in contexts where estimate, expect, and guess would be more appropriate."

III. Kapp and Trade

In 2010 the U.S. government's Interagency Working Group on Social Cost of Carbon (IAWG) presented its estimate of the social cost of carbon "to allow agencies to incorporate the social benefits of reducing carbon dioxide (CO2) emissions into cost-benefit analyses of regulatory actions that have small, or 'marginal,' impacts on cumulative global emissions." The IAWG's central estimate for the social cost of CO2 in 2010 was $21 in 2007 dollars, based on a 3% discount rate. One of the damages associated with an increased increment of carbon emissions in a given year is specified as "property damages from increased flood risk." Would the Flood Control Act of 1936 have any pertinence to their cost-benefit analysis?
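The 3% discount rate does heavy lifting in an estimate like this: damages a century away all but vanish in present value. A minimal sketch of the standard present-value formula makes the point; the 3% rate matches the IAWG's central case, while the damage figure and horizons are hypothetical round numbers for illustration:

```python
# Present value of a future climate damage under exponential discounting:
# PV = damage / (1 + r)**years. The damage amount is hypothetical.

def present_value(future_damage, rate, years):
    return future_damage / (1 + rate) ** years

damage = 1_000_000                  # $1M of future flood damage (hypothetical)
for years in (10, 50, 100):
    pv = present_value(damage, 0.03, years)
    print(years, round(pv))
```

At 3%, a million dollars of damage 100 years out is worth only about five percent of a million today, which is why the choice of discount rate dominates any social cost of carbon calculation.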

The Kaldor-Hicks compensation test constitutes a guiding principle for the selection of a discount rate for cost-benefit analysis, the IAWG report explains:
One theoretical foundation for the cost-benefit analyses in which the social cost of carbon will be used— the Kaldor-Hicks potential-compensation test—also suggests that market rates should be used to discount future benefits and costs, because it is the market interest rate that would govern the returns potentially set aside today to compensate future individuals for climate damages that they bear.
The Kaldor-Hicks test presumably allows the analyst to set equity considerations aside while evaluating economic efficiency. Does it?

David Ellerman argues that the efficiency/equity distinction is simply an artifact of the choice of numeraire. In other words, the supposed efficiency of a policy outcome measured in dollars is an illusion created by the fact that efficiency is being measured with the "same yardstick" that was used to assign "value" to incommensurable things like human life, output of goods and services and damage to the environment. If one reverses the process and establishes human life or environmental damage as the unit of measurement, then the results of the analysis are also reversed.

Although simple, this is not an intuitively obvious argument, so Ellerman illustrates it with a very simple example in which John values apples at one dollar each, while Mary values them at 50 cents. Social wealth would be improved if Mary sells an apple to John for 75 cents. Under the Kaldor-Hicks criterion, social wealth would also be improved if Mary lost her apple and John found it, even though Mary receives no compensation. Kaldor-Hicks would deem this an efficiency gain because John could potentially compensate Mary by paying her 75 cents for the lost apple. Measured in apples, though, there has been no change in total wealth, because Mary's lost apple is exactly balanced by the one John found.

But using apples as the unit of measurement changes everything. Since John values one apple at one dollar, he also values one dollar at one apple. Mary values a dollar at two apples. Measured in apples, social wealth would be improved if John lost a dollar -- worth only one apple to him -- and Mary, who values the dollar at two apples, found it. John's cost is smaller -- in apples -- than Mary's benefit. But since a dollar is a dollar, if the unit of measurement were dollars, the cost and the benefit would exactly balance, leaving no net gain.
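Ellerman's reversal can be tabulated directly. The only inputs are the valuations from his example (John prices apples at $1.00, Mary at $0.50); everything else follows by arithmetic:

```python
# Ellerman's "same yardstick" illustration: whether an uncompensated
# transfer counts as an efficiency gain depends on the numeraire.

john_apple_in_dollars = 1.00   # John values an apple at $1.00
mary_apple_in_dollars = 0.50   # Mary values an apple at $0.50

# Case 1: Mary loses an apple, John finds it.
gain_in_dollars = john_apple_in_dollars - mary_apple_in_dollars  # +0.50
gain_in_apples = 1 - 1         # zero: one apple lost, one apple found

# Case 2: John loses a dollar, Mary finds it.
# In apples, John values $1 at 1 apple; Mary values $1 at 2 apples.
gain_in_apples_2 = 2 - 1       # +1 apple of "social wealth"
gain_in_dollars_2 = 1 - 1      # zero: a dollar is a dollar

print(gain_in_dollars, gain_in_apples)      # → 0.5 0
print(gain_in_apples_2, gain_in_dollars_2)  # → 1 0
```

Each uncompensated transfer registers a "gain" in exactly one numeraire and washes out in the other, which is the whole point: the efficiency verdict is an artifact of the yardstick.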

Ellerman's illustration may seem trivial, but the "same yardstick" argument comes from Paul Samuelson, who pointed out that, measured in money, the marginal utility of income is constant at unity. Bill Gates would value an extra $20 a week of income as much as a Walmart clerk would -- $20 worth! It's a tautology.

Of course that's not the only problem with the IAWG's cost of carbon estimate. Moyer, Woolley, Glotter and Weisbach argued that the social cost of carbon estimates in the IAWG models are constrained by shared assumptions of persistent economic growth. Even a modest negative impact on productivity, they find, would increase social cost of carbon estimates by several orders of magnitude above the IAWG estimates.

Johnson and Hope found that assigning equity weights to damages in regions with lower incomes or using different discount rates generates social cost of carbon estimates two and a half to twelve times those of IAWG. Foley, Rezai and Taylor argued that the social cost of carbon and the relevant social discount rate are conditional on a specific policy scenario "the details of which must be made explicit for the estimate to be meaningful." There is also Martin Weitzman's analysis that the uncertainty about the prospect of catastrophic climate outcomes renders traditional cost-benefit analysis irrelevant.
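The mechanics behind the Johnson and Hope result are simple to sketch: damages in poorer regions are scaled up by a weight such as (reference income / regional income)^η before aggregation. The function below is a generic sketch of that scheme, not their model; all the numbers are made up for illustration:

```python
# Sketch of equity weighting in social-cost-of-carbon aggregation.
# Damages in each region are weighted by (reference_income / income)**eta
# before summing, so a dollar of damage to the poor counts for more.

def equity_weighted_total(damages, incomes, reference_income, eta=1.0):
    return sum(d * (reference_income / y) ** eta
               for d, y in zip(damages, incomes))

damages = [10.0, 10.0]       # $ damages in a rich and a poor region (hypothetical)
incomes = [40_000, 4_000]    # per-capita incomes (hypothetical)
ref = 40_000                 # weights relative to the rich region

unweighted = sum(damages)                                # 20.0
weighted = equity_weighted_total(damages, incomes, ref)  # 10 + 100 = 110.0
print(unweighted, weighted)  # → 20.0 110.0
```

Even this toy two-region case multiplies aggregate damages more than fivefold, which is the flavor of the 2.5x-to-12x range Johnson and Hope report.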

Remember how we got to this analytical impasse, though? Clark's analysis of planning for public works and the National Resources Board report were concerned with the environment to be sure. But their sense of urgency was more particularly focused on the unemployment crisis. Controlling floods, reclaiming eroded agricultural land and replanting forests were viewed as ways to productively employ workers who would otherwise have to be given welfare or work at "leaf raking" make-work jobs. Public works were being considered as a way to smooth out the fluctuations of the business cycle and ameliorate the effects of cost-shifting due to employers accounting for workers as a variable, rather than a fixed overhead cost.

In February 2010, the U.S. official unemployment rate was 9.8%. The word "unemployment" doesn't appear in the 50-page IAWG report on the social cost of carbon. Nor do the words "recession," "jobs," "poverty" or "inequality." The word "labor" occurs several times, but only in the context of an arcane footnote about "a method of estimating η using data on labor supply behavior." The "lives and social security of people" are given short shrift. "Growth," however, appears 34 times, about two-thirds of which refer to economic growth. In the IAWG report, one may conclude, economic growth is unrelated to employment of labor but closely correlated with "interest rate," which appears 20 times -- roughly the same frequency as "growth" in the economic context.

That's the problem right there.

In 1950, the same year the Green Book was curbing the use of secondary benefits in cost benefit analysis, Karl William Kapp's book The Social Cost of Private Enterprise was published, inspired by and elaborating on J. M. Clark's analysis of cost shifting. "As Kapp implied," remarked Joan Martinez-Alier, "from a business point of view, externalities are not so much market failures as cost-shifting successes." From that perspective, the IAWG's $21 a ton estimate of the social cost of carbon dioxide also may be better understood as an agenda-shifting success rather than a planning failure.

Eighty years ago, it may still have been possible to believe that those cost-shifting successes of business could be remedied through planning and public works conducted by a democratically-responsive government. Today, the role of government and the intention of cost-benefit analysis are very different from what was professed in the foreword to the National Resources Board's 1934 report. How has that happened?
"Many difficult conceptual issues such as externalities, consumer surplus, opportunity costs, and secondary benefits that had troubled earlier practitioners were resolved and other unresolved issues, such as the discount rate, were at least clarified." -- Maynard M. Hufschmidt, "Benefit-Cost Analysis 1933-1985"
Just what are those "secondary benefits"? What are the "opportunity costs"? How did the difficult issues get resolved? And who cares?

What if I told you that "secondary benefits" was a cipher for "wages of labor," that "opportunity costs" was code for "return on capital investment" and that a significant portion of those supposedly sacrosanct financial "opportunities" result from cost-shifting? What if I pointed out that the "difficult issues" were "resolved" by declaring that wages were of little concern to public policy making but that profits were paramount?

Would you conclude with Hufschmidt that these difficult "conceptual" issues had been "resolved" or "at least clarified"? Or would you object that these are political, not conceptual, issues and that they have been suppressed and arrogated by the ideological framing and technocratic jargon of cost-benefit analysis? I leave the last word to John Maurice Clark:
"It comes down to this, that any use of labor that is worth anything at all is worth that much more than nothing. In that respect the socialist view of business depressions is correct and any rebuttal that attempts to explain away this fact by the reckonings of financial expenses is a bit of economic sophistry." Studies in the Economics of Overhead Costs (1923).

Tuesday, November 4, 2014

New Introductory Economics Textbooks

I haven’t used this blog to call attention to my new textbooks, Microeconomics: A Fresh Start and Macroeconomics: A Fresh Start, so let me do that now.  Where did they come from, what’s new about them, and who are they for?  In this post I will describe the concepts behind the books in a general way, and in future posts I’ll discuss particular things to look for in each of them.

1. Where did they come from?  They came from my own teaching, which has been intensive in introductory economics to an extent that I suspect few other teachers can match.  Because of my particular career history, virtually all my economics instruction has been at the intro level for the past 20 years.  For the past 16 years I’ve been at Evergreen, where most teaching is interdisciplinary and team-based.  I have taught introductory economics with natural scientists, philosophers, historians, sociologists, political scientists and cultural studies scholars.  Each time I have searched for different points of contact and tension between economics and these other fields.  In the process I’ve come to understand what makes economics distinctive, and to identify the assumptions and mental frameworks economists use that people with other backgrounds don’t.  Conveying what is specific to economics seems to me to be a big part of what introductory teaching should be about.

In addition, Evergreen’s pedagogy is steeped in critical thinking and inductive, problem-solving modes of learning.  Well before it was fashionable I was devising a wide range of workshops, labs, mini-projects and other activities for students to experience using economics and not just memorizing definitions and diagrams.  With this approach I simply couldn’t use any of the existing introductory economics texts.  They were all written in an authoritative voice: this is what you must believe.  They provided lots of models with sparse explanation; the expectation was obviously that the instructor would spend most of the available class time filling in the explanations the books left out.  They were based on the tabula rasa notion that students walk into the classroom with an empty head, waiting to have it filled up with “material”—rather than recognizing that students are people who come to education with a head already stocked with ideas, so that education has to be about new ideas meeting existing ones.

I had no choice but to begin writing my own book, slowly, a chapter at a time.

2. What’s different?  Basically, the differences fall into two categories.  First, as I’ve just described, the pedagogical model is entirely different.  Economics in these books is offered as an object of scrutiny, not a fount of unchallengeable wisdom.  Where economics differs in its assumptions and strategies of understanding, I simply put them beside the approaches of other disciplines and let students make up their own minds.  Evidence is presented not to demonstrate that some particular economic theories are “right”, but as a basis for critical thinking.  My premise is that the world economics tries to analyze is extremely complex, and no single theory will be right every time.  The best approach is case-based: identifying the kinds of situations where particular theories do a good job, as well as the ones where they tend to come up short.

Very important to the pedagogy is deep explanation.  I try to make every assumption explicit, every time.  This includes how the elements of a theory can (or can’t) be measured, why curves are drawn the way they are (and whether they could be drawn differently), and how you would know if the theory were wrong.  I talk about the history behind the various theories—when they were developed and what they were intended to accomplish.  I give extra examples.  My goal is to pack as much of the explanation as possible into the reading, so that more class time can be spent on activities, not lectures.

The explanation style also addresses itself to the ideas that, in my experience, students are likely to bring to the study of economics.  Where their experiences are relevant, I try to bring them in.  Where popularly-held economic ideas conflict with careful reasoning, I don’t hesitate to point it out.  Above all, I’m attentive to the problem of language, the way the same words may mean one thing in everyday conversation and another in a technical economics context.  You can see this, for instance, in words like “equilibrium” and “trade”.  It’s not that the student’s language is “wrong”, just that adjustments need to be made for the way words are used differently.

The other major difference is content, and here I have to describe the biggest, most consequential decision I made when writing these texts.  There have always been textbooks written to correct the “errors” of mainstream economics or to offer what their authors thought was a better theoretical framework.  I’m sympathetic to many of the ideas you can find in these books, and it’s stimulating to have a wide range of views, but in the end each such book has the same flaw: it’s my way or the highway.  These books replace the espousal of doctrine A by the mainstream books with the author’s preferred doctrine B.  They don’t invite a critical reading, and invariably you (the instructor) are going to find that there’s a lot of B that you just can’t buy into.  In any event, none of these books has really caused the profession as a whole to rethink how it approaches the introductory curriculum.

So I decided I would not try to fix economics.  These books do not represent my personal take on what’s wrong with economics as it is and how it ought to be reformed.  I simply don’t go there.  (Except on occasions when I can’t help it.)  Rather than the gap between my ideal economics and actually existing economics, I chose to focus on the gap between economics as it is currently practiced by economists and the way it is presented to introductory students.  That gap is huge, and it gives me a lot to work with.

Why is it so huge?  I think two factors have coincided.  On the one hand, economics has evolved considerably over the past two decades or so.  It has become much more empirical, more interested in institutions, and somewhat more realistic about human behavior.  There is also more willingness to entertain models with unconventional features, like multiple equilibria, speculative bubbles, increasing returns and imperfect competition.  The rate of change may be less than what some of us wanted, but it has been substantial all the same.  On the other hand, however, there is immense inertia in the textbook market.  Part of this is our fault: intro teachers want to recycle their notes, exercises and exams.  They want to teach more or less the same course this year they taught last year, with the occasional tweak to liven it up.  Of course, there are few incentives for most economists to knock themselves out reinventing the intro course.  But a lot of the blame also has to be put on the textbook publishers.  A modern commercial textbook is a behemoth, the product of an immense army of editors, graphics people, marketers and other staff.  It’s a big bet in a big lottery.  And just like Hollywood, the publishers try to produce blockbusters by slightly varying the formula of last year’s blockbuster.  There’s a saying—I think attributable to David Colander—that intro texts have to abide by the 15% rule: no book can be more than 15% different from the others and still see the light of day.

This explains the title of my two books.  They are written from the ground up to reflect, as best as I am able, the state of economics today, as if no previous textbook had ever been written.  They are fresh starts.

3. Who are they for?  I think there are three potential audiences.  First, many economists may be feeling frustrated with the existing array of texts.  They are embarrassed by teaching subject matter that their discipline has largely moved on from, while not covering key concepts that underlie contemporary research.  Some may be swayed by high-profile student protests, demanding more relevance and a less doctrinaire attitude in undergraduate economics.  They may also be gravitating to the critical thinking–active learning paradigm in teaching as they see colleagues in other disciplines doing.  These economists may be willing to adopt a new kind of textbook in spite of the obvious costs in time and discarding prior investments.

The second group is heterodox economists.  If they are looking for a textbook that trumpets their particular brand of heterodoxy they will be disappointed.  But they may be content with a book they don’t have to teach against.

The third group is not necessarily academic at all.  It consists of people who are curious about what economics is up to in the post-2008 world and would like to read a literate, open-minded but comprehensive account.  I’ve tried to make these texts engaging—not in the disjointed manner of the collage-style products of the big commercial houses (which seem to be in a permanent state of fighting off boredom), but in the form of a narrative, the way a good science journalist presents science.  Maybe there are readers out there who are eager for a pair of books about contemporary economics that doesn’t shy away from the technicalities, but also looks at it from a broader cultural, political and historical perspective.

Monday, November 3, 2014

Science Communication

There’s an interesting article on Dan Kahan in today’s Chronicle of Higher Education.  Kahan, for those who don’t know, is the current guru of science communication.  His updating of the work of Mary Douglas is valuable; I always liked Douglas even though I found her politics unappealing.  There is some Douglas in my earlier writing on risk norms (for instance in Markets and Mortality).

In general, I think Kahan is right that tribal affiliation is a major impediment to communicating the science of climate change.  This is a problem for both sides—not only for denialists who see themselves fighting to save the free market against enviro-crypto-socialists but also for the enviros who see the climate problem as a vindication of their disdain for economic growth and materialistic values.  Now how much the economy should be regulated and what sorts of values contribute to a good life are important questions and deserve all the attention and reflection we want to give to them, but the tribal identities people have developed around them can only get in the way of dealing with climate change in a rational fashion.

In other words, this isn’t just about anti-science Republicans or Koch-funded denialism (which do exist), but also environmentalists using the climate to act out their moralism.  Big cars are b-a-a-a-d.  TV is b-a-a-a-d.  Trying to make money is b-a-a-a-d.  Climate change is nature’s revenge on humans for all that badness.  (See more venting about green moralism here.)

It’s difficult to get green folks to stop doing this because they get reinforcement from their tribe (including their inner tribe) every time they denounce the moral turpitude of the other side.  But they should see that it’s not working and stop.

One other, more specific thought about science communication: scientists spend their life honing observation and measurement.  For them, science consists of devising and applying new methods for identifying, documenting and measuring our world.  When they think of communication, they have in mind explanations of what empirical results they’ve established and the methodological basis for them.  The assumption seems to be that public understanding of science takes the form of an unordered collation of facts about observations and methods.  We have ice cores!  Here are fluctuations in CO2 concentrations in the atmosphere over the past 150,000 years!  Here is the change over 50 years in the range of a beetle that damages spruce forests!  Here is our best estimate of the relationship between global temperatures, thermal expansion and sea level rise!

But that’s not really how people think themselves through complex issues.  Rather, they need stories, and science succeeds when it supplies a compelling story that provides a structure for individual facts.  The story doesn’t have to be perfectly correct in all its particulars, just true enough that it does the job.  You can qualify it as you go deeper.

This is why I cringed when I read the latest synthesis report from the IPCC.  Of course, there is nothing scientifically wrong about their brief distillations of research into the various aspects of greenhouse gas emissions, carbon response and the like.  But this is not a story.  What I tried to do here, here and here was a stab at a story.  I’m not a scientist (just an economist), and my presentation was radically simplified even by my standards, but something like this is what might reach people who do not do science for a living.

Saturday, November 1, 2014

Opportunity Costs and Secondary Benefits

"Many difficult conceptual issues such as externalities, consumer surplus, opportunity costs, and secondary benefits that had troubled earlier practitioners were resolved and other unresolved issues, such as the discount rate, were at least clarified." -- Maynard M. Hufschmidt, "Benefit-Cost Analysis 1933-1985"
And they all lived happily ever after... (in the Cost-Benefit fairy tale, that is).

What are secondary benefits? What are opportunity costs? How did the difficult issues get resolved? And who cares?

What if I told you -- just for argument's sake, mind you -- that "secondary benefits" was a cipher for "wages of labor" and that "opportunity costs" was code for "return on investment"? What if I pointed out that the "difficult issues" were "resolved" by declaring that wages were of little concern to public policy making but that profits were paramount? Would you care about secondary benefits, opportunity costs and how those difficult issues were resolved?

Economists generally don't. At least not until now anyway.

The Sandwichman has scheduled an EconoSpeak blog post for Wednesday, November 5, titled "Public Works, Economic Stabilization and Cost-Benefit Analysis" (originally announced as "Unemployment, Interest and the Social Cost of Carbon") that doesn't quite go as far as the "what if" scenarios above. It explores highlights of the untold story of CBA from the New Deal to today's climate change policy "Integrated Assessment Models."

But keep those "what if" scenarios in mind. Cost-Benefit Analysis is about class struggle, the rules for conducting that struggle and which class makes the rules.

Tuesday, October 28, 2014

A Beginner’s Guide to Probability Distributions, Risk and Precaution

Coincidences abound.  Last night I gave a lecture to my Cost-Benefit Analysis class on uncertainty and precaution, and this morning I see a writeup of a new article by Nassim Nicholas Taleb and his high-profile colleagues on the application of precautionary theory to genetically modified organisms.  One concern I had skimming through the article is that it seems to parallel Martin Weitzman’s Dismal Theorem, but he isn’t cited.  I don’t know the literature well enough to say anything about priority in this area, and I’d be happy to hear from those who do.

Meanwhile, on with the show.  I will leave out the diagrams because they take too long to produce.

1. A convenient property of the normal distribution.  Consider a normal distribution—any normal distribution.  What’s the probability you will be to the right of the mean?  50%.  To the right of one standard deviation (σ) above the mean?  About 1/6.  To the right of two σ’s above the mean?  About 2.5%.  To the right of three σ’s above the mean?  Less than .5%.  This is simply the Empirical Rule.  It tells us that the probability of an above-average outcome falls faster than the distance of that outcome from the mean increases.  That continues to the very asymptotic end of the distribution’s tail.  Of course, the same reasoning applies to the other side of the distribution, as outcomes become ever further below-average.
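The tail probabilities quoted above can be checked with nothing more than the standard library; here is a minimal sketch (the function name `upper_tail` is mine) using the complementary error function:

```python
from math import erfc, sqrt

def upper_tail(k: float) -> float:
    """P(Z > k) for a standard normal variable Z, via the
    complementary error function."""
    return 0.5 * erfc(k / sqrt(2))

print(upper_tail(1))  # ~0.1587, about 1/6
print(upper_tail(2))  # ~0.0228, about 2.5%
print(upper_tail(3))  # ~0.0013, less than 0.5%
```

Running it reproduces the Empirical Rule figures in the paragraph above.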

In an expected value calculation we add up the products of possible future outcomes with their respective probabilities.  For two possible outcomes we have:

EV = V(1) p(1) + V(2) p(2)

where V(1) is the value of outcome 1, p(1) its probability and so on with outcome 2.  In other words, EV is simply a weighted average of the two potential outcomes, with their probabilities providing the weights.  As more possible outcomes are envisaged, the EV formula gets longer, encompassing more of these product terms.

The significance of the empirical rule for EV calculation is that the further from the mean a possible outcome is, the smaller its product term (value times probability) will be.  Extreme values become irrelevant.  Indeed, because the distribution is symmetrical, you would only need to know the median value, since it’s also the average.  But even if you didn’t know the median going in, or if you have only an approximation to a smooth distribution because of too few observations on outcomes, if you know the underlying distribution is normal you can pretty much ignore extreme possibilities: their probabilities will be too small to cause concern.
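The shrinking product terms can be made concrete. A minimal sketch (function name mine): weight each outcome, measured in σ's above the mean, by the standard normal density and watch the value-times-weight product collapse.

```python
from math import exp, pi, sqrt

def normal_pdf(x: float) -> float:
    """Standard normal density."""
    return exp(-x * x / 2) / sqrt(2 * pi)

# Product term (value times probability weight) for an outcome
# k sigmas above the mean:
for k in [1, 2, 3, 5]:
    print(k, k * normal_pdf(k))
# The products shrink rapidly: extreme outcomes add almost nothing to EV.
```

Past one σ, each step outward multiplies the value but divides the probability by far more, so the product terms vanish into irrelevance.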

2. But lots of probability distributions aren’t normal.  The normal distribution arises in one of the most important of all statistical questions, the relationship between a sample and the population from which it’s drawn.  Sample averages converge quickly on a normal distribution; we just need to get enough observations.  That’s why statistics classes tend to spend most of their time on normal or nearly-normal (binomial) distributions.

In nature, however, lots of things are distributed according to power laws.  These are laws governing exponential growth, and much of what we see in the world is the result of growth processes, or at least processes in which the size (or some other measure) of a thing in one period is a function of its size in a previous period.  In economics, income distribution is power-law; so is the distribution of firms by level of employment.  Power law distributions differ in two ways from normal ones: they are skewed, and they have a long fat tail over which the distance from the mean increases faster than probability declines.  If you want to know the average income in Seattle you don’t want to ignore a possible Bill Gates.

In many decision contexts, moreover, we don’t have enough observations to justify assuming normality.  Instead we have a t-distribution.  The fewer observations we draw on, the longer and fatter the tails of t.  True, the t-distribution is symmetrical, but with sufficiently few observations (degrees of freedom) it shares with power law distributions the characteristic that extreme values can count more in an expected value calculation, not less as in a normal distribution.

3. Getting Dismal.  The relationship between EV and extreme values depends on three things: whether the probability distribution is normal, if not how fat the tail is, and how long the tail is.  Weitzman’s Dismal Theorem says that if the tail is fat enough that the product (value times probability) increases as values become more extreme, and if the tail goes on to infinity—there is no limit to how extreme an outcome may be—the extreme tail completely dominates more likely, closer-to-the-mean values in calculations of EV.  The debate over this theorem has centered on whether the unboundedness of the extreme tail (for instance the potential cost of catastrophic climate change) is a reasonable assertion.
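The fat-tail mechanic can be illustrated with a Pareto tail, where P(X > x) = (x_m/x)^α. A hedged sketch (the octave bookkeeping here is my own illustration, not Weitzman's construction): when α < 1, outcomes twice as extreme contribute more, not less, to the expected value.

```python
def pareto_survival(x: float, alpha: float, xm: float = 1.0) -> float:
    """P(X > x) for a Pareto distribution with scale xm and shape alpha."""
    return (xm / x) ** alpha

def octave_contribution(x: float, alpha: float) -> float:
    """Rough contribution of outcomes in [x, 2x] to E[X]:
    the value x times the probability mass of that octave."""
    return x * (pareto_survival(x, alpha) - pareto_survival(2 * x, alpha))

for x in [1, 2, 4, 8, 16, 32]:
    print(x, octave_contribution(x, alpha=0.8))
# With alpha < 1 each successive octave contributes MORE to the
# expected value; rerun with alpha = 1.5 and the contributions
# shrink instead.
```

With no upper bound on x, the sum of these growing contributions diverges, which is the intuition behind the tail dominating the EV calculation.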

4. Precaution, and precaution on precaution.  This provides one interpretation of the precautionary principle.  On this view, the principle applies under two conditions, a high level of uncertainty and the prospect of immense harm if the worst possibility transpires.  High uncertainty means a fat tail; immense potential harm (for which irreversibility is usually a precondition) is about the length of the tail.  Enough of both and your decision about risk should be dominated by the need to avoid extreme outcomes.

This view of precaution is consistent with cost-benefit analysis, but only under the condition that such an analysis is open to the possibility of non-normal probability distributions and fully takes account of extreme risks.  That said, the precautionary framework described above still typically translates uncertainty into statistical risk, and by definition this step is arbitrary.  For instance, we really don’t know what probability to attach to catastrophic releases of marine or terrestrial methane under various global temperature scenarios.  Caution on precaution is advised.

UPDATE: I pasted in some images of probability distributions from the web.

Corporate Inversions and the Revolving Door of Tax Attorneys

My inbox had a long-winded story from Bloomberg BNA about some candid remarks from a tax attorney named Hal Hicks on how corporate inversions became such a hot topic:
Hicks epitomizes the world of high-level Washington lawyers who have played a behind-the-scenes role in helping these tax-driven address changes proliferate. Top federal tax officials, many of them career corporate lawyers, have sometimes closed tax breaks only after companies slipped through them. And former officials like Hicks use skills and contacts honed in office to help companies legally outmaneuver the government.

Until this year, when address-shifting by more than a dozen companies worth $100 billion caught policy makers’ attention and President Barack Obama clamped down again, inversion rules had for a decade attracted little notice outside the small community of international tax lawyers in Washington. At the Treasury Department and the Internal Revenue Service, officials—many on hiatus from private practice—crafted the rules in dialogue with top corporate law and accounting firms.

While some European nations have historically relied on career civil servants, the top ranks of the U.S. tax administration have swapped staff with industry for decades. It’s a low-cost way to provide government with the best legal talent, said Gregory Jenner, a former acting assistant Treasury secretary, who calls it an “incredibly beneficial tradition.”

“Putting rookies into these jobs—they would be overwhelmed,” Jenner said. “It’s too high-level, too sophisticated, too complicated.” The risk, critics say, is that some government lawyers may continue to sympathize with corporate interests, or be swayed by former colleagues.
I can see some conservatives reading this and saying that this is due to the overly complex nature of U.S. tax laws governing multinationals. I can see some liberals reading this and saying this is what we get when we let the representatives of multinationals write our tax laws. I would say -- they are both right.

Monday, October 27, 2014

Daisy Chain Time Travel Macro

For some reason, my comments never show up on Simon Wren-Lewis’ blog, Mainly Macro.  Maybe they were not meant to be.  But today I will use this site as a soapbox to reply to his (and Nick Rowe’s) argument that public borrowing can impose a burden on future generations.

You can read the original, but the basic idea is that lending money is a form of deferred consumption that wends its way through time like a daisy chain.  People live for two periods, with overlapping generations.  They buy bonds during the first period and sell them during the second.  Thus in each period the debt is neatly handed off to the following generation.  But there is an end time, when public debt must be retired.  At that point, instead of allowing the final generation, in the bloom of period 1, to purchase and thereby rollover the debt of their ancestors, the government taxes them to retire it.  So behold, the borrowing of government from generation the first is a delayed charge against generation the last.  And that is why paygo pension systems are an intergenerational crime.
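The hand-off argument reduces to a toy ledger. This is my own minimal paraphrase of the premises, not Wren-Lewis's or Rowe's actual model (the function name is mine):

```python
def daisy_chain(n_generations: int, debt: float) -> list:
    """Net lifetime transfer to each generation under the end-time premise:
    generation 0 receives the borrowed spending; middle generations buy
    the bonds young and sell them old (a wash); the last generation is
    taxed to retire the debt instead of rolling it over."""
    net = [0.0] * n_generations
    net[0] += debt    # proceeds of the original borrowing
    net[-1] -= debt   # the tax that retires the debt at the end time
    return net

print(daisy_chain(5, 100.0))  # [100.0, 0.0, 0.0, 0.0, -100.0]
```

The whole burden-shifting result sits in the last line of the ledger: remove the assumption that the debt must be retired in some final period and the `-debt` entry never arrives.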

The logic is impeccable, in the sense that if you accept the premises you must accept the conclusion.  The question is whether the premises correspond in any meaningful way to the world we inhabit.

One obvious problem is the assertion of an end time.  The Greatest Generation, as we know, ran up what was at that point the Greatest Debt; in fact, gross federal debt (including the portion held by the Fed) topped out in 1946 at just under 120% of GDP.  Those living today are heirs to that borrowing “binge”.  But we haven’t suffered for it, since (1) that specific chunk of debt has become much smaller in relation to our incomes today due to inflation and real growth, and (2) we continue to roll over principal and interest, since the end time is not nigh.  As long as we don’t go crazy, and keep our current and future borrowing on a sustainable basis, the end time need never come.  (And I’m abstracting, as Simon and others do, from the benefits financed by borrowing—like saving the world from Hitler or, more mundanely, all those nice CCC-built parks—that are also legacies for the future.)

The second is less recognized.  The consumption-smoothing life plan at the heart of the standard OLG model simply does not reflect the facts.  Here, for instance, is the 2011 average household net worth (not including home equity) by age of household head, as estimated by the Census Bureau:

Except for those over 75, older people have more net income-generating assets than younger people, and even the geeziest geezers hold more assets than those under 45.  They die with their financial boots on, making the daisy chain of deferred consumption a false depiction.

The bottom line is that the generation is not a meaningful unit of accounting when it comes to the distributive effects of public deficits.  How about shifting attention to the decision to sell bonds to the rich instead of taxing them?

Frack = Defect

The New York Times today has an informative article on BASF, the German chemical giant, centered on the effect that fracking in the US has on business decisions in Europe.  To sum up, natural gas prices in the US have plummeted due to the widespread use of this dubious technology, which affects chemicals in two ways—lowering the cost of fossil fuel feedstocks and the energy needed to process them.  BASF has responded, logically enough, by shifting new investment from Germany to cheaper energy locations, including the US.

But this has an effect on energy policies in Europe too.  It shifts the political economic balance away from a decarbonizing energy transition (Energiewende), which raises costs there even as they are tumbling here.  In other words, by pushing fracking and generally supporting (non-coal) fossil fuel production, the Obama administration is undercutting foreign efforts to respond to the climate crisis.  In the global collective action game of planetary sustainability, the US is a defector.

We are unlikely to see a global agreement on reducing fossil fuel use in the next several years; what can be done to at least protect the space for effective action at the national level?  At the top of the agenda should be a framework for carbon tariffs, border taxes that offset cost differences due to differences in carbon emission policies.  This would involve at the least a legal framework; ideally it should also spell out tariff-setting formulas to reduce the scope for manipulating the system to serve other ends.  If we can’t get everyone to cooperate on sensible action to forestall catastrophic climate change, at least we should try to limit the damage caused by defection.

This Just In!

I understand Governor Christie has relented somewhat on his policy of quarantining all passengers from West Africa, symptomatic or not, in tents. He is now allowing them, if they choose, to spend 21 days stuck in traffic at a bridge. What a mensch!

Sunday, October 26, 2014

"Cake without Flour" -- Duncan Foley on the Dilemmas of Economic Growth

The following excerpt is from Duncan Foley's outgoing Presidential Address to the Eastern Economics Association, "Dilemmas of Economic Growth," presented March 9, 2012 (Reprinted by permission from Macmillan Publishers Ltd: Eastern Economic Journal (2012) 38, 283–295 published by Palgrave Macmillan). The title is an allusion to Herman Daly's parody of Cobb-Douglas production function hyperbole "as implying that it is possible to bake a cake without eggs or flour as long as the cook whisks the empty bowl faster and faster."

CAKE WITHOUT FLOUR


Some growth economists might regard the considerations we have just reviewed as rather quaintly anachronistic in putting so much emphasis on the material nature of economic production. Well-established patterns of economic growth show that as incomes rise, the proportion of output as measured by such indexes as real GDP consisting of material goods steadily declines. The major sources of growth in incomes (and, given the way we measure GDP, in indexes of output) shift to the tertiary sector, particularly services. The chief input to services is human intelligence, and at least in some accounts, intelligence is an unlimited resource. So why couldn't real GDP, measured to include the use-value of services, continue to grow without limit?

There are some immediate problems with this conception. Strictly speaking the production of almost all services does require material and energy inputs, as the gigantic server farms required for information technology are a concrete reminder. Maintaining the human capital to provide a glittering array of intellectual services requires material and energy inputs, and these very likely increase as the quality of intellectual output rises.

But this vision of endless growth without material or energy inputs requires some re-examination of just what it is that we regard as output and try to measure in indexes like real GDP. Some rapidly growing service industries, such as finance, seem to be able to produce increasing measured output without much input increase, even of human employment, at all [Basu and Foley 2011; Foley 2011]. An examination of the issues raised by the growing significance of service industries, which have no measurable output, raises some deep questions about the conception of economics.

The paradigmatic economic interaction for economic theories rooted in the marginalist revolution, such as neoclassical economics and its various descendants, is a transaction in which one good moves from the possession of an agent who subjectively values it less to the possession of another agent who values it more, in exchange for another good (in many transactions money). As the familiar Edgeworth-Bowley box construction illustrates, this type of transaction puts both agents on a higher (or at least no lower) indifference curve, and thus achieves a Pareto-improvement in the allocation of existing resources. Many financial transactions are of this type, for example, initial public offerings to take companies public, real estate brokerage, insurance contracts, and other more exotic forms of financial arbitrage. It is important to remember, however, that the transfer of existing goods or assets in these transactions is not production. When financial intermediaries appropriate some part of the economic surpluses generated in these transactions as revenue, however, economic statisticians have felt compelled to regard the resulting incomes as part of national income, and to invent an imaginary product, financial services, to put on the product side of the accounts as a counterpart.

It is hard to imagine limits to the magnitude of subjective economic surpluses that could be realized through transactions of this type. If, for example, policies or the historical evolution of the division of labor increase economic insecurity by eroding the institutions of traditional societies, one can easily imagine an unlimited expansion of insurance transactions as a result. But from the point of view of classical political economy, it is the increase in material productivity of labor, not the increase in economic insecurity associated with the expansion of the division of labor, which is the source of improvements in economic welfare. This point of view is deeply embedded in the methods of national income accounting, for example, in the fundamental rule that transactions involving the transfer of existing assets do not constitute production of goods and services, no matter how much economic surplus they may represent.

The classical political economists and Marx addressed these issues through the concepts of "productive" and "unproductive" labor. In the version of this distinction Marx distilled from his critical review of Adam Smith, productive labor (whether it produces material goods or services, since providing haircuts is hard to distinguish from making hats) returns the costs of production with a profit, while the cost of unproductive labor is paid out of revenues without any recovery or return. This classical-Marxian line of thinking puts the origin of the incomes from the production of "services," such as finance, in a different perspective.

This perspective is perhaps most clearly articulated in Marx's analysis of wage labor and the origins of surplus value. Productive labor is responsible for the whole value added in production, but receives only a fraction of the value added in the form of the wage. The resulting surplus value constitutes a pool of potential revenue for which capitalist producers, landowners, intellectual property owners, financial firms controlling money capital, and the state compete. The implications of this analysis, which, unfortunately, is for the most part systematically excluded from the modern economics curriculum, are far-reaching. No particular capitalist firm, no matter how large in revenue and employment, can have much direct effect in increasing the pool of surplus value. Thus "money-making" in capitalist society is proximately based on taking surplus value away from others. In an economy where resources and intellectual property command enormous rents, there may be a vanishingly small connection between the revenue of any entity and its actual contribution to production of useful output.

Many people today are dazzled by the apparently magical ability of innovators to appropriate enormous revenues on the basis of ideas and their manipulation alone. This phenomenon has understandably spawned theories of a "new" economy, supposedly based on new principles of the creation of value. Classical-Marxist political economy, in contrast, locates incomes to innovation not in new principles of the creation of value, but in new (or newly important, since most of these "business models" have actually been around for a long time) modes of appropriation of surplus value. As Slavoj Žižek vividly points out, increasing returns in the appropriation of rents for intellectual property simultaneously obscure the origin of the resulting enormous incomes in the pool of surplus value appropriated from productive labor and mystify the factors behind the increasing inequality in the distribution of these revenues [Žižek 2012]. The origin of the rent of a particularly exploitable resource like a waterfall or a petroleum deposit is hard enough to understand, but at least the owner of a waterfall cannot allow an unlimited number of cotton mills to exploit the resulting usable energy. By contrast, the owner of the rights to distribute a piece of software that, due to network externalities, becomes a technical standard, can allow an effectively unlimited number of users to install the software and charge each of them a fee.

It would be, however, a peculiar political economy that convinced itself that the increasing returns in the rents to artificially created assets, such as systems software, were a remedy for thermodynamically imposed decreasing returns to resource use in material production.

Friday, October 24, 2014

Optimization and Its Discounts

Trying to reconcile cost shifting with the discounting of future climate change costs and benefits has taken me on some unexpected detours. I was initially thinking about bills of exchange and their early-modern role in concealing church-outlawed "usury" in the guise of a more palatable commercial transaction. Discounting began as an arithmetical accounting exercise that arose out of the discounting of bills of exchange.

Both compound interest and discounting partake of the same exponential function -- from different ends of the calculation -- so it is easy (and misleading) to think of the discounting of a bill of exchange as a kind of loan. Discounting a bill of exchange is a sales transaction. The credit involved is commercial credit extended from a supplier to a purchaser. The bank then buys the bill of exchange from the supplier at a discount from its face value.
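That the two calculations are the same exponential run in opposite directions is easy to verify; a minimal sketch (function names mine):

```python
def future_value(pv: float, r: float, t: float) -> float:
    """Compound a present sum forward at rate r for t periods."""
    return pv * (1 + r) ** t

def present_value(fv: float, r: float, t: float) -> float:
    """Discount a future sum back: the same exponential, inverted."""
    return fv / (1 + r) ** t

# A bank buying a 100-unit bill of exchange due in one period, at 5%:
price = present_value(100, 0.05, 1)
print(price)                          # ~95.24: the discount from face value
print(future_value(price, 0.05, 1))   # ~100: the two operations are inverses
```

The arithmetic symmetry is exactly what makes it tempting, and misleading, to read the discount as a loan rather than as the price of a sale.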

If one insists on seeing a loan from the banker in the transaction, it would only be an indirect loan to the purchaser of the goods, not to the supplier who sold the bill of exchange to the bank. But that loan would be secured by the goods that were the original object of the transaction that originated the bill of exchange... (Unless, that is, the bill of exchange was only speculative, a circumstance that Marx labeled a swindle.)

The important point is that bills of exchange originated in real transactions of goods, not in purely financial transactions. This has serious implications for the use of "discounting" in cost benefit analysis of public investments.

If the discount rate is meant as a metaphor, it is a peculiarly bad one. The goods in question -- costs and benefits of climate change mitigation, for example -- have both negative and positive values, but more importantly they have not been contracted for by the interested parties -- there is no "bill of exchange" to be discounted. Furthermore, the beneficiary of the discounted price is not society but the polluting firm that has shifted part of its costs onto society and the environment. This perverse distribution of costs and benefits (and incentives) is concealed by the aggregate generality of the climate economy models that construe everything as one big happy economy.

Put it this way: discounting the future costs and benefits of greenhouse gas emissions provides a subsidy to the most prolific emitters of greenhouse gases that they can then reinvest at compound interest. This is hardly a matter of being "neutral" on questions of distribution. Nor is it a question of generational equity. This is simply taking the bankers' perspective on financial accumulation and proclaiming it "socially optimal."

The Passing of Fred Lee: An(other) Old Wobbly Bites The Dust

Last night (Oct. 23) at 11:20 PM, CDT, prominent heterodox economist Fred Lee of the University of Missouri-Kansas City died of cancer.  He had stopped teaching during the last spring semester and was honored at the 12th International Post Keynesian Conference held at UMKC a month ago. I do not know whether he was a card-carrying member of the IWW, as was a friend of mine, Bill Grogan, who died over a month ago and about whom I blogged here at the time; but on more than one occasion, including at that conference at UMKC last month, I heard Fred called an "old Wobbly," and I never heard him dispute the description. For any who do not know, "Wobbly" has always been the nickname for a member of the Industrial Workers of the World (IWW), a pro-working-class, anarcho-syndicalist universal union.

Whatever one thinks of heterodox economics in general, or of the views of Fred Lee in particular, he should be respected as the person more than any other behind the founding of the International Confederation of Associations for Pluralism in Economics (ICAPE) and the Heterodox Economics Newsletter.  While many talked about the need for an organized group pushing heterodox economics in all its varieties, Fred did more than talk: he went and organized the group and its main communications outlet.  He also regularly and strongly spoke in favor of heterodox economics, the unity of which he may have exaggerated.  But his voice in advocating the superiority of heterodox economics over mainstream neoclassical economics was as strong as that of anybody I have known.  I also note that he was the incoming President of the Association for Evolutionary Economics (AFEE), which will now have to find a replacement.  He had earlier stepped down from his positions with ICAPE and the Heterodox Economics Newsletter.

It was both sad and moving to see Fred at the PK conference last month in Kansas City.  He was in a wheelchair with an oxygen tank, his rapidly declining health stunningly apparent.  There were several sessions honoring his work.  At one of the major ones, he spoke at the end. Although he was having trouble breathing and could barely speak, he rose and made his comments, becoming impassioned at the close and speaking up forcefully to proclaim his most firmly held positions.  He declared that his entire career had been devoted to battling for the downtrodden, poor, and suffering around the world, "against the 1%!" and I know that there was not a single person in that standing-room-only audience who doubted him.  He openly wept after he finished those stirring words, as those who were not already standing rose to join the ovation.

Fred's own research agenda focused on developing a heterodox microeconomics, one based on the idea of markets dominated by oligopolistic firms with price-setting powers and more.  In the Post Keynesian camp he drew heavily on the work of Alfred Eichner as well as Michal Kalecki, although he was also influenced by American Institutionalists such as Gardiner Means, hence his presidency-elect of the old institutionalist AFEE.  He wrote on many other topics as well, and in more recent years on the broader question of the meaning and application of heterodox economics and how to develop a coherent heterodox alternative.  But his most famous work was, and will probably remain, his heterodox, arguably Post Keynesian, approach to microeconomics.

At this point I must note that while we were always friends, and I knew Fred for a long time, we had some fairly strong differences of opinion in recent years.  A decade ago I, with David Colander and Ric Holt, wrote a book and an article, followed up by another book and some other articles, the first book being _The Changing Face of Economics: Conversations with Cutting Edge Economists_ (2004, University of Michigan Press) and the first article being "The Changing Face of Mainstream Economics" (Review of Political Economy, 2004).  We argued that "mainstream" is a sociological category -- those running the show in the profession (top departments, journals, etc.) -- while "orthodox" is an intellectual category, the hardline version of which is widely called "neoclassical economics."  We argued that "heterodox" was both: not running things and also intellectually anti-orthodox.  This opened the door to a category of "non-orthodox mainstream economists," with people like George Akerlof being possible examples.  Several heterodox economists disagreed with this argument and viewed us as weakening the criticism of "the orthodox mainstream" with this divisionist argument, and quite a few of them expressed their disagreements in print, with an entire book dedicated essentially to reading us the riot act as a bunch of namby-pamby wafflers or worse.  Fiercest of all in this crusade, both verbally and in print, was good old Fred Lee, who saw us as undercutting, undermining and demoralizing the movement for a unified and strong heterodox economics battling that "orthodox mainstream."

I note that at the meeting in Kansas City I stood up to speak about this and to praise what I considered to be the strong and principled position held by Fred, despite our disagreements.  I also spoke to him privately afterwards, and we parted on friendly terms.  However, I note that he laid out in his public remarks a distinction between a "heretic" and a "blasphemer," both of these terms positives for him.  A heretic is someone who questions orthodox doctrine, but still at some level believes it, while a blasphemer is someone who utterly and totally rejects it.  He told me in our final private conversation that he viewed me as being a mere heretic, while he was a true blasphemer.

RIP, Fred.

Barkley Rosser

Addendum:  The book criticizing Colander, Holt, and me is "In Defense of Post-Keynesian and Heterodox Economics: Responses to Their Critics," ed. by Fred Lee and Marc Lavoie, 2012, Routledge.   In effect the bottom line may boil down to our saying that the heterodox can be the source of cutting edge ideas that the mainstream sometimes adopts, such as behavioral economics, whereas they say that any idea that can be accepted by the mainstream is simply being coopted, and that the heterodox must overthrow the mainstream orthodoxy root and branch.  This may be what separates "heresy" from "blasphemy."

Further Addendum:  I have been informed by email from Steve Ziliak, a former colleague of Fred's from when he was at Roosevelt University in Chicago, that like my late friend Bill Grogan, Fred was a card-carrying member of the IWW from 1985 on, and that indeed he became the Chair of the General Executive Council, with the IWW's national HQ in Chicago.  As a result of that, and at that time, he ended up becoming the recipient and owner of the ashes of Joe Hill, which had apparently gone on some long odyssey.  Given that Joe Hill was an honest-to-gosh Wobbly, maybe the most famous of them all aside from Big Bill Haywood, the IWW ended up getting at least some of his ashes, and it was Fred who was their overseer, at least for some time.
A link from Steve Ziliak to see Fred signing for Joe Hill's ashes on 11/18/1988: http://reuther.wayne.edu/node/12333

Been Discounted So Long It Looks Like Up To Me

The monks ascending the steps on the outside of the wall are growing the GDP, while the monks descending the steps on the inside are abating carbon dioxide emissions. Climate change mitigated -- emissions decoupling accomplished!


"Business profit," Schumpeter tells us, "is a prerequisite to the payment of interest on productive loans... The entrepreneur is the typical interest payer." There are three cost-reduction strategies that firms may pursue to maximize profits. The most opportunistic is cost-shifting, in which some third party, society or the environment gets stuck with the cost rather than the firm. The cost doesn't go away, it just becomes external to the accounting entity's balance sheet and thus is an "externality." Greenhouse gas emissions are such an externality. They are a cost-shifting success for the profit maximizing firm.

Carbon trading schemes and Pigouvian taxes are supposed to "internalize" those externalities so that the users of fossil fuels, for example, are made to pay the full cost -- or at least a larger proportion of the cost -- of their production processes or consumption preferences. Assessments of the costs and benefits of such policies typically discount the present value of future costs and benefits. The appropriate discount rate, it is often argued, should reflect market interest rates or else it may result in spending that is less efficient than would occur through the market. William Nordhaus in A Question of Balance:
The choice of an appropriate discount rate is particularly important for climate-change policies because most of the impacts are far in the future. The approach in the DICE model is to use the estimated market return on capital as the discount rate. The estimated discount rate in the model averages 4 percent per year over the next century. This means that $1,000 worth of climate damages in a century is valued at $20 today. Although $20 may seem like a very small amount, it reflects the observation that capital is productive [S'man: no, it reflects the assumption that capital is "productive"]. Put differently, the discount rate is high to reflect the fact that investments in reducing future climate damages to corn and trees should compete with investments in better seeds, improved equipment, and other high-yield investments. With a higher discount rate, future damages look smaller, and we do less emissions reduction today; with a lower discount rate, future damages look larger, and we do more emissions reduction today.  
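The discounting arithmetic in Nordhaus's passage is easy to check. A minimal sketch in Python, using only the figures he gives (a 4 percent annual rate over a 100-year horizon):

```python
# Present value of $1,000 of climate damages incurred a century from now,
# discounted at Nordhaus's assumed 4 percent annual rate.
damages = 1000.0   # dollars of damages, 100 years out
rate = 0.04        # annual discount rate
years = 100

present_value = damages / (1 + rate) ** years
print(f"${present_value:.2f}")  # close to the $20 Nordhaus cites
```

The growth factor 1.04 to the 100th power is about 50, which is what shrinks $1,000 of future damages to roughly $20 today.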
Update:  But... if profitability is a function of cost shifting, the market interest rate a function of profit, the discount rate a function of the market interest rate and cost/benefit optimization of GHG abatement a function of the discount rate, doesn't said optimization embed a circular reference? No, this is both too simple and too forgiving an interpretation of the relationship between discounting and cost shifting. More on this soon...

Nordhaus, again:
In thinking of long-run discounting, it is always useful to remember that the funds used to purchase Manhattan Island for $24 in 1626, when invested at a 4 percent real interest rate, would bring you the entire immense value of land in Manhattan today. 
Professor Nordhaus here simply updates and tones down the hallucinations of Dr. Richard Price, who exclaimed in 1774:
One penny, put out at our Saviour's birth to 5 per cent compound interest, would, before this time, have increased to a greater sum, than would be contained in a hundred and fifty millions of earths, all solid gold. 
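The sheer scale of Price's fancy is itself an artifact of compound interest. A quick sketch, assuming roughly 1,773 years of compounding from the Nativity to 1774 and taking an old penny as 1/240 of a pound (no attempt is made here to price gold or earths):

```python
# Richard Price's penny (1774): one penny compounded at 5 percent
# per year from roughly year 1 to 1774, i.e. about 1,773 years.
penny = 1.0 / 240.0      # one old penny, in pounds sterling
factor = 1.05 ** 1773    # compound growth factor over 1,773 years
total = penny * factor

print(f"{total:.3e} pounds")  # an astronomical sum, on the order of 10**35 pounds
```

Exponential growth at a constant rate over millennia produces figures no real economy could ever embody, which is exactly the fetish-like quality Marx saw in M — M'.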
As Marx began the chapter in Capital in which he cited Price's dazzled fancy:
The relations of capital assume their most externalised and most fetish-like form in interest-bearing capital. We have here M — M', money creating more money, self-expanding value, without the process that effectuates these two extremes. 
In his discussion of discounting, Nordhaus doesn't distinguish between compound interest and the process that brings about the apparent productivity of capital that he extols. What makes this lack of distinction particularly telling is that he is supposedly discussing solutions to a problem that results from the very process that makes capital productive of profits sufficient to sustain interest payments on money capital. It is as if the greenhouse gases are unrelated to the industrial processes that emit them.

Compound interest does not emit greenhouse gases. What people do to make the profits to pay the compound interest does. Money capital does not compound itself. The discount rate is no more independent of the cost-shifting that engenders it than it is of the greenhouse gas emissions whose costs are being shifted. D.I.C.E. thrown will never annul chance.

Wednesday, October 22, 2014

Helicopter Money in the Midst of a PLOG

Mark Thoma and his readers have pulled together a nice collection of writing on a concept known as “helicopter money”. To be honest, as I read all the links I decided to fire off my own comment, which needs a little refining. My opening line is simple:
Helicopter money means using fiscal stimulus with easy money to overcome an awful shortage of aggregate demand, noting the following well-established ideas.
PLOG is Paul Krugman's term for a prolonged large output gap, which has been our situation since 2008. This period has also been described as a liquidity trap, where fiscal stimulus is clearly needed since traditional monetary policy has done all it can do and we are still in a PLOG. This naturally leads to my first well-established proposition: we should be using fiscal policy that maximizes the bang for the buck. Which leads me to the rest of my rant:
(1) Transfer payments for the poor do so by giving income to the people most likely to spend it; (2) Payments to the rich or tax cuts for the rich have no bang but a lot of bucks (Barro-Ricardian equivalence); (3) We could also do this with public infrastructure investment; (4) The Republican dorks running Congress are trying to cut (1) and (3) while emphasizing more of (2); which is why (5) We need to take fiscal policy out of the hands of these Republican dorks who run Congress.

Monday, October 20, 2014

Pension Funds and Private Equity

There is a fascinating piece by Gretchen Morgenson in today’s New York Times about the large investments public pensions have made in private equity funds.  The focus is on the secrecy of these deals, but the question also comes up as to whether these investments are proper given the fiduciary role that pension fund managers are supposed to play.

One thought that occurs to me is this: pension funds by their nature should position themselves overall toward relatively lower-risk portfolios.  Yet pension funds pay a management fee to private equity firms, and then the first 20% or so of investment profits go to private equity as well.  For these fixed costs, pension investors receive rights to the residual returns, which may be positive or, as in the case that leads the article, negative.  Present and future pensioners are paying for the opportunity to play a lottery.
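The asymmetry is easy to see with stylized numbers. A sketch assuming a conventional "2 and 20" fee structure with no hurdle rate (the figures are illustrative, not the actual terms of any fund in the article):

```python
# Stylized "2 and 20" payoff: the pension (limited partner) pays a 2%
# management fee no matter what, and the general partner takes 20% of
# any gains. Losses fall entirely on the pension.
def pension_net(committed, gross_return):
    fee = 0.02 * committed                        # management fee, always paid
    profit = committed * gross_return             # gross gain or loss
    carry = 0.20 * profit if profit > 0 else 0.0  # GP's cut of gains only
    return profit - carry - fee

print(pension_net(100.0, 0.15))   # good year: a 15 gross gain nets 10 to the pension
print(pension_net(100.0, -0.10))  # bad year: the pension eats the whole loss plus the fee
```

In the good year the general partner collects 5 of the 15 gained; in the bad year it still collects its 2, while the pension is down 12. The downside is borne entirely by the party that is supposed to be the risk-averse fiduciary.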

It should really be the other way around.  General partners like private equity funds should pay pension funds an initial percent on investment for access to capital along with returns up to some specified level.  The private equity folks, being more risk-loving (in theory) would then grab what’s left.  In this way the risk would be allocated according to levels of fiduciary responsibility.  Why should wealthy speculators load the risk onto working class retirees?