It isn't every day the Sandwichman gets the opportunity to praise Lawrence H. Summers. Back in February, Summers tweeted, "The inverse of Say's Law holds today: Lack of demand creates lack of potential supply." At a full employment event put on last week by the Center on Budget and Policy Priorities, Summers elaborated on what he meant by that. Starting at about minute 22:20 of the video, Summers eviscerates the lump-of-lobster fallacy.
What is the lump-of-lobster fallacy? Samuel Brittan was wont to invoke the lump-of-labor fallacy in his columns at the Financial Times but occasionally the compositors would tamper with the copy, once rendering the old canard as the lump-of-lobster fallacy. It seems to me that this was an appropriate reductio ad absurdum of the nonsense claim that unspecified persons believed there was "a fixed amount of work to be done."
As the Sandwichman wrote back in January (Yasraeh's Law), "I have described the lump-of-labor fallacy claim as 'an inverted Say's Law on steroids.'"
Monday, April 7, 2014
Does Glenn Hubbard Want to be President Jeb Bush’s Chief Economic Advisor?
I saw on the news this weekend that Republicans are hoping Jeb Bush runs in 2016 – which may be due in part to the latest on BridgeGate. Memo to the Republican Party – Chris Christie is not a good candidate. So that's the good news. The bad news is that I had to endure another rant from Glenn Hubbard, which included:
But structural changes are plainly at work too, based in part on slower-moving demographic factors. A 2012 study by economists at the Federal Reserve Bank of Chicago estimated that about one-quarter of the decline in labor-force participation since the start of the Great Recession can be traced to retirements. Other economists have attributed about half of the drop to the aging of baby boomers. Baby boomers can't be the whole story, though, since the participation rate has declined for younger workers too. This part of the drop is a function of various factors, including simple discouragement, poor work incentives created by public policies, inadequate schooling and training, and a greater propensity to seek disability insurance.

Dean Baker does the needed clean-up on this canard:
Hubbard noted the sharp fall in labor force participation since the downturn. He attributed it to a lack of incentive for people to work. This is in striking contrast to the more obvious logic, that when people have been trying unsuccessfully to find jobs for 6 months or a year, they eventually give up ... The problem with Hubbard's story is that he doesn't have a good explanation for why people suddenly decided that they didn't want to work. He points to an increase in the length of unemployment benefits, but this happens in every downturn. Furthermore, the maximum duration of benefits has been cut back sharply from its peak of 99 weeks in the first years of the recession with no corresponding surge in employment. The Affordable Care Act will make it possible for many people to get health care insurance without working or without working full time, but that should only have begun affecting the data in the last few months as the health care exchanges came into existence. It would not explain the drop in labor force participation that was already quite evident by the summer of last year.

Dean discusses other reasons why Hubbard's inward-shift-of-the-labor-supply-curve story does not fit the data. Let me simply add: if a lack of labor supply (as opposed to weak aggregate demand) were to blame here, why haven't real wages increased? Hubbard does note aggregate demand factors:
Does this mean that the Obama administration's "targeted, timely and temporary" stimulus package was the right approach? Actually, no. Increasingly, it appears to have been a poor match for the severity of the downturn and the magnitude of the required boost. After the Great Recession's sharp decline in investment and employment, U.S. business probably needed a more curative jolt to restore confidence. A sustained infrastructure program, rather than a temporary one for "shovel-ready" projects, would have provided more reassurance of longer-term demand. And far-reaching tax reform could have provided both a near-term fillip from front-loaded business tax cuts and a credible prospect for future growth. What we don't know is whether the Obama administration's activist policies failed to draw more Americans back to work because they were poorly executed or because they didn't do enough to raise aggregate demand. A better designed activist fiscal policy would have made more headway in encouraging growth, but deeper factors behind the downward shift in labor force participation still remain.

Why does this remind me of Romney's 2012 campaign? Let's be clear – Christina Romer pushed for a better designed fiscal stimulus back in 2009, with more in the way of government investment spending. And her call for a more sensible fiscal stimulus got zero support from Glenn Hubbard's side in 2009. No – they pushed for low bang-for-the-buck tax cuts without a clue as to how to pay for them in the long run. I would hope Jeb Bush – if he does decide to seek the Republican nomination – could do better than Mitt Romney did in his 2012 Presidential campaign. And if he were to become the next President, let's all hope that his economic policies are not as ill thought out as the economic policies enacted by his brother.
Sunday, April 6, 2014
GDP and Well-Being, Positive and Normative
There’s a review in today’s New York Times of Diane Coyle’s new book on the history of GDP calculation. Shot through it is a crazy confusion, abetted—nay demanded—by standard economic practice.
It all goes back to the primordial distinction between positive and normative analysis. Positive analysis is explanatory, predictive, or simply descriptive: what and why. Normative analysis is evaluative: should. We economists beat the heads of our poor charges each year in introductory classes with this distinction. Positive analysis, we say, can be validated by reasoning and evidence, while normative analysis is ineluctably conditional on the values of whoever is doing the evaluating.
Yes and no. The distinction is important, but it is not ironclad. There are lots of ways the two types of analysis are connected, and I won’t get into the philosophical issues here, but it is obvious, just from paying attention, that economics wants to have a single analytical framework to answer both positive and normative questions. Economists don’t want one model to predict what the equilibrium outcome will be and another, using completely different elements and based on different assumptions, to rank that outcome against others according to how beneficial it is. Most models in economics do double-duty: they support positive and normative analysis equally.
So it is with GDP. It is indispensable for the heavy lifting that positive economics, especially macroeconomics, requires. You wouldn’t be able to document whether you were in a boom or a recession without it, or at least not nearly so well. For instance, the NBER's business cycle dating is surely more accurate today than its retrospective judgments of cycles before GDP measurement was established during the New Deal. But GDP is also invoked as a measure of economic “success”—our policies are said to work if they crank up GDP growth or fail if they don’t.
Understandably, GDP has come in for a lot of criticism regarding its measurement of economic well-being. It includes a lot of stuff that doesn’t make us better off (more cops if they’re just a response to an upsurge in crime), leaves out a lot of stuff that does (unpaid labor inside and outside the home), ignores harmful consequences of economic activity (pollution and resource depletion), and utterly fails to price many goods in a way that reflects their actual value to society (such as government-supplied services, which are priced at cost of production). Finally, consumers (such as you and me) do not always spend our income in ways that maximize our well-being, and in some documented cases (e.g. commuting to work) spending can go up while well-being goes down. Personally, I’m convinced: GDP is a deeply flawed indicator for normative purposes.
But what of positive analysis? There I think we’re on much more solid ground. GDP measures the size of the market economy. We happen to live in a market economy, so this is a useful measure. It works well for predicting market consumption, imports, paid employment, that sort of thing. If you think about it, the very characteristics that people criticize from a normative standpoint—how the selection of traded goods and the prices they trade for misrepresent their true impact on us—are the ones that make GDP work for a well-defined set of positive tasks. If we priced things according to their “true” value (supposing we could do that) instead of their market value, we would lose the market part.
Alas, it is sometimes necessary to blur this distinction. For example, we need to have a conception of real GDP so we can tease out the rate of inflation. Since the qualities of goods are constantly changing, they need to be priced in order to distinguish between price increases that contribute to inflation and those that reflect quality improvements. (Or maybe prices are constant but should be seen as contributing to inflation because quality has gone down.) Estimating the value of quality (hedonic regression) brings us closer to the line separating normative from positive. I think the line is not (necessarily) crossed, however, if the (monetary) willingness to pay for quality is kept distinct from the effect of quality on consumer well-being.
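To make the mechanics concrete, the workhorse here is the hedonic price regression. A minimal sketch of the standard setup (my notation, not any particular statistical agency's actual method):

$$\ln p_{it} = \beta' x_{it} + \sum_{t} \delta_t D_t + \varepsilon_{it}$$

where $p_{it}$ is the price of good $i$ at time $t$, $x_{it}$ is a vector of observed quality characteristics (speed, size, durability), and the $D_t$ are time dummies. The $\beta' x_{it}$ term prices the characteristics at the market's willingness to pay; the $\delta_t$ coefficients are read as pure inflation. Nothing in the regression claims the characteristics raise well-being, which is exactly how the positive/normative line can be respected.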
And where does that leave us? The distinction between positive and normative analysis is important and needs to be maintained. There should be no presumption that the concepts and models that work for one will work for the other. We should not sacrifice the fit between model and purpose in one realm in order to be able to shoehorn it into the other. I think, though I will not follow it up here, that welfare economics has suffered mightily from attempts to squeeze its analysis into the same models that work well for positive—explanatory and predictive—work.
So let’s not visit the same damage on our properly-functioning positive models, like GDP. Keep and even improve GDP as a measure of the size of monetary flows within an economy, and look elsewhere for appropriate indicators of human well-being. (I have a hunch that economists, who are good at the first task, will prove to be less well-suited to the second.) Do positive well, and do normative well, and don’t let either get in the way of the other.
The 'Technology' Trap: "'Permanent' Technological Unemployment: 'Demand for Commodities Is Not Demand for Labor'"
Hans P. Neisser, The American Economic Review, Vol. 32, No. 1, Part 1 (Mar., 1942), pp. 50-71:
The theory of technological unemployment is a stepchild of economic science. The facts seem to stand in such blatant contradiction to orthodox doctrine, according to which no "permanent" technological unemployment is possible, that most American textbooks prefer not to mention the problem itself. This attitude is of recent times. The analysis to which Ricardo subjected the displacement of labor by the machine in the last edition of the Principles had stimulated a lively discussion among the later classical economists, who, as we shall see instantaneously, followed two different lines of thought. With the rise of neoclassical equilibrium analysis, the discussion died down, at least in Anglo-Saxon literature, and only recently the oldest argument against technological unemployment, originally developed by McCulloch, was revised in a little more sophisticated form by two American economists, P. H. Douglas and A. Director in The Problem of Unemployment (New York, 1931). We can, therefore, distinguish three approaches:
1. The "Law of Markets" approach, formulated at first by McCulloch in Principles of Political Economy (first edition, 1825) Part I, chapter VII, and, as pointed out above, revised by Douglas and Director, applies Say's Law of Markets to the Labor Market. As there cannot be a general over-production of commodities produced, so there cannot be a general over-supply of labor. We shall analyze this argument in the first section of this paper, with some supplementary remarks in Section V.
2. McCulloch's argument was not taken up by the other classical authors, because it is at variance with the classical theory of the demand for labor. As John Stuart Mill stated it most pointedly: "Demand for commodities is not demand for labor" (Principles, vol. I, p. 5, para. 9). The maintenance of the demand for commodities according to Say's Law, therefore, does not militate against an over-supply of labor. It is the volume of circulating capital, interpreted as wage fund, that governs the demand for labor. Following Ricardo's lead, the theory of "compensation" of technological displacement of laborers was worked out. In contrast to the Law of Markets approach, which does not allow any exceptions to the denial of "permanent" technological unemployment, the Wage Fund School maintains the occurrence of compensation only as the general rule, exceptions from which are deemed possible though unlikely. In Section III, we shall consider this argument.
3. The neo-classical equilibrium approach differs from the preceding ones by denying the possibility of technological unemployment only as to a state of long-run general equilibrium proper, in which complete adjustment of all the variables of the economic system is attained (size of firm, input, output, prices of goods produced, prices of productive services, interest rate). The difference of the neo-classical approach from the Law of Markets approach is concealed by the use of the terms "temporary" and "permanent" by the latter school. By "permanent," Douglas and Director do not refer to the state of long-run equilibrium proper. This is clear from their definition of "temporary" technological unemployment (op. cit., pp. 113 ff.), which refers only to such obstacles to the reabsorption of laborers as: slow working of competitive mechanism, slow transfer of expenditure from one good to the other, or of workers from one industry to the other. A state of affairs in which these obstacles are overcome (as we shall assume throughout the present paper) still might be in a merely "short-run equilibrium" in the neoclassical sense, which is based on the assumption that all equipment is "given" as to quality and quantity, while long-run equilibrium proper in the neo-classical sense requires, among other things, the adjustment of the size of the firm and of the quality of equipment in such a way that average costs equal price for all firms. Indeed, if the Law of Markets is valid at all, it must be applicable to periods of any length, provided only the period is long enough to overcome the temporary obstacles; and similar considerations apply, as we shall see, to the wage-fund argument.
...
While there is little merit in the two classical approaches, the neoclassical one stands on much firmer ground, on account of its lesser scope. However, even the neo-classical approach is far from giving the unambiguous answer its adherents ascribe to it. This will be shown in Section IV. On the other hand, while the unqualified denial of "permanent" technological unemployment in traditional theory is not justified, preliminary empirical investigations (which cannot be presented in the present paper) have convinced the present writer that popular opinion vastly exaggerates the amount of unemployment which properly could be called "technological." The relatively small size of technological unemployment in history is attributable, partly, to the independent forces increasing employment, which briefly will be discussed in the last section. In no case would it be permissible to use simply the current unemployment statistics as a verification or a repudiation of the theories which affirm or deny the existence of technological progress that creates unemployment. Hitherto the discussion has been marred by a confusion of historical and theoretical statements. ... What would one think of an argument against the law of gravity based on the undeniable truth that only a very small number of people habitually fall against the center of the earth with an acceleration of 33 feet per second? And yet, the reference to unanalyzed observation is not worse than the reference to unanalyzed historical facts. In order to obtain a reliable answer to our question, it is necessary to keep constant the other factors as far as they are truly independent, i.e., not exclusively or almost exclusively governed by the volume of technological unemployment itself.
From Gregory Ray Woirol, The Technological Unemployment and Structural Unemployment Debates (1996):

In this mid to late 1940s theoretical literature, a third period of consensus was achieved to add to the consensus periods of 1927—29 (the Say-Douglas purchasing power argument) and 1933—40 (the neoclassical price-flexibility argument). As illustrated by the work of Neisser, Hagen, Lange, Belfer, and Pu, this consensus — built on Keynesian arguments — was that no mechanism existed in the economic system to guarantee the automatic reabsorption of technologically unemployed labor.
This is where the technological unemployment debates ended. With the full employment of World War II and the spread of Keynesian policy ideas about how to achieve full employment, the pessimism about unemployment and productivity trends that had motivated the debates of the 1920s and 1930s disappeared.... In effect by the early 1950s the state of aggregate professional opinion about technological unemployment had returned to the confident views of the late 1920s.
Piketty, Piketty, Piketty
Well, I just got Capital in the Twenty-First Century and started in on it. It looks exciting, but I confess to being puzzled by the claim that r>g means that inequality grows inexorably. We have the capital share = r(K/Y), and K/Y in the long run equal to s/g (these are Piketty's "fundamental" equations), where g is the sum of the population and per capita output growth rates and s is the net saving ratio. Nothing to quarrel with there. But then the capital share in the long run will be rs/g. So if r and g are constant (and s as well), the capital share remains constant whether r>g or r<g. What am I missing?
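To pin down the algebra I am puzzling over (a sketch in Piketty's notation, with made-up numbers purely for illustration):

$$\alpha = r\,\frac{K}{Y}, \qquad \frac{K}{Y} \;\longrightarrow\; \frac{s}{g} \ \text{(long run)}, \qquad \text{so} \quad \alpha \;\longrightarrow\; \frac{r\,s}{g}.$$

For instance, with $r = 5\%$, $s = 10\%$, and $g = 2\%$, the capital share settles at $0.05 \times 0.10 / 0.02 = 0.25$, and it stays at 25 percent so long as $r$, $s$, and $g$ stay put; note that $r > g$ here, yet nothing grows inexorably.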
Krugman had a blog post where he spells out Piketty's argument that a decrease in g will increase r/g and thus the capital share: r will fall by less than g if, as Piketty argues, production is CES and the elasticity of substitution is greater than 1. That makes sense, but this will be so whatever the initial level of r is relative to g, whether above or below unity.
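In symbols (a standard CES setup in my own notation, not Piketty's or Krugman's): with

$$Y = \left[ a\,K^{\frac{\sigma-1}{\sigma}} + (1-a)\,L^{\frac{\sigma-1}{\sigma}} \right]^{\frac{\sigma}{\sigma-1}},$$

the return on capital and the capital share are

$$r = a\left(\frac{K}{Y}\right)^{-1/\sigma}, \qquad \alpha = r\,\frac{K}{Y} = a\left(\frac{K}{Y}\right)^{\frac{\sigma-1}{\sigma}}.$$

A fall in $g$ raises $K/Y = s/g$; since $r$ declines with elasticity only $1/\sigma < 1$ when $\sigma > 1$, the share $\alpha$ rises. But, as noted, this holds regardless of whether $r$ starts out above or below $g$, which is precisely my puzzle.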
Help!
Saturday, April 5, 2014
Return of the Creature from the DeLong Lagoon
Project Syndicate published a version of Brad DeLong's ill-informed anti-Marx mutterings with an odd twist. Where his New York Times commentary had started out "I have long thought that Marx's fixation on the labor theory of value made his technical economic analyses of little worth," the Project Syndicate version attributes the unfounded criticism of Marx to Columbia University assistant professor Suresh Naidu:
The economist Suresh Naidu once remarked to me that there were three big problems with Karl Marx’s economics. First, Marx thought that increased investment and capital accumulation diminished labor’s value to employers and thus diminished workers’ bargaining power. Second, he could not fully grasp that rising real material living standards for the working class might well go hand in hand with a rising rate of exploitation – that is, a smaller income share for labor. And, third, Marx was fixated on the labor-theory of value.

DeLong has indeed "long thought" that Marx "vanishes into the swamp which is the attempt to reconcile the labor theory of value with economic reality, and never comes out." People who know Suresh Naidu and his work find it extremely unlikely that the views attributed to him by DeLong are accurate. So what's this business of attributing his muddled misconceptions to something Naidu had "once remarked" to him?
Thursday, April 3, 2014
The Creature from the DeLong Lagoon
Professor Brad DeLong:
**"via social reciprocity and negotiation try to keep us all pulling in the same direction"
I have long thought that Marx's fixation on the labor theory of value made his technical economic analyses of little worth. Marx was dead certain for ontological reasons that exchange-value was created by human socially-necessary labor time and by that alone, and that after its creation exchange-value could be transferred and redistributed but never enlarged or diminished. Thus he vanished into the swamp, the dark waters closed over his head, and was never seen again.
Brad forgot to add that Karl Hussein Marx was born in KENYA!
[Image: Brad DeLong or Karl Marx?]
Just a few pages from Marx's A Contribution to the Critique of Political Economy are enough to show that DeLong's "long thoughts" about Marx must have emerged from a swamp with waters darker than anything even the creature from the black lagoon would deign to wallow in. In a section titled "Historical Notes on the Analysis of Commodities" Marx surveyed a century and a half of thought in classical political economy "beginning with William Petty in Britain and Boisguillebert in France, and ending with Ricardo in Britain and Sismondi in France" that dealt with the concepts of labor time and exchange value and their relationship. Of particular pertinence to refuting DeLong's ontological fantasy is Marx's discussion of the contributions of James Steuart and David Ricardo.
In Marx's account, Steuart was the first to make a "clear differentiation between specifically social labour which manifests itself in exchange value and concrete labour which yields use values..." Furthermore, Steuart was "interested in the difference between bourgeois labour and feudal labour," and consequently shows "that the commodity as the elementary and primary unit of wealth and alienation as the predominant form of appropriation are characteristic only of the bourgeois period of production and that accordingly labour which creates exchange-value is a specifically bourgeois feature [emphasis added]." In other words, the relationship between labour time and exchange value was viewed by Steuart (to Marx's approbation) as historically contingent, not as some ontological certainty, as DeLong claims.
Ricardo, according to Marx, "neatly sets forth the determination of the value of commodities by labour time, and demonstrates that this law governs even those bourgeois relations of production which apparently contradict it most decisively." Does this imply that after its creation this exchange value is "never enlarged or diminished," as DeLong asserts? Marx notes the following qualification by Ricardo: "the determination of value by labour-time applies to 'such commodities only as can be increased in quantity by the exertion of human industry, and on the production of which competition operates without restraint.'"
Whatever one thinks of the labour theory of value, DeLong's claims about "Marx's 'fixation'" are so utterly groundless and fantastic as to make one suspect that perhaps Brad mistakenly thought his commentary was scheduled to be published on April 1st. Especially foolish is his account of Marx's alleged beliefs about the impossibility of re-employment of workers displaced by machinery:
Karl Marx in his day could not believe the volume of production could possibly expand enough to re-employ those who lost their jobs as handloom weavers as well-paid machine-minders or carpet-sellers. He was wrong.

Obviously DeLong is not aware that Marx devoted a section in Capital to precisely this question, "The theory of compensation as regards the workpeople displaced by machinery," the conclusions of which are more in accord with Keynes's 1934 radio address, "Is the Economic System Self-Adjusting?" than with DeLong's foolish caricature:
The labourers that are thrown out of work in any branch of industry, can no doubt seek for employment in some other branch. If they find it, and thus renew the bond between them and the means of subsistence, this takes place only by the intermediary of a new and additional capital that is seeking investment; not at all by the intermediary of the capital that formerly employed them and was afterwards converted into machinery.

Marx reserves his most caustic retort to "the theory of compensation," however, for the first paragraph of the succeeding section:
All political economists of any standing admit that the introduction of new machinery has a baneful effect on the workmen in the old handicrafts and manufactures with which this machinery at first competes. Almost all of them bemoan the slavery of the factory operative. And what is the great trump-card that they play? That machinery, after the horrors of the period of introduction and development have subsided, instead of diminishing, in the long run increases the number of the slaves of labour!

Was Marx wrong, yet again? I leave the last word to DeLong, who smugly, albeit inadvertently, confirms Marx's prediction to the letter by playing what he imagines is the great trump-card of the worst-case scenario:
The pessimistic view is that some pieces of (3)* will be (a) mind-numbingly boring while (b) stubbornly impervious to artificial intelligence, while (4)** will remain limited and for the most part poorly paid. In that case, our future is one of human beings chained to desks and screens acting as numbed-mind cogs for Amazon Mechanical Turk, forever.

*"use our hands, mouths, brains, eyes, and ears to make sure that ongoing processes and procedures stay on track"
**"via social reciprocity and negotiation try to keep us all pulling in the same direction"
Wednesday, April 2, 2014
Rumpelstiltskin!
Rumpelstiltskin turns out to be uniquely relevant to the issue of inequality and technology in that it has to do with spinning, one of the key activities to be mechanized in the period leading up to the "Industrial Revolution." Figuratively speaking, the Spinning Jenny spun straw into gold. It also displaced women from a strategic productive activity. This aspect is discussed in Jane Schneider's "Rumpelstiltskin's bargain: folklore and the merchant capitalist intensification of linen manufacture in early modern Europe" and in Jack Zipes's "Fairy Tale as Myth, Myth as Fairy Tale." I will leave these fascinating connections aside for the time being to focus only on the denouement of the Grimm Brothers' literary version of the story.
Rumpelstiltskin offers the miller's daughter/queen a way out of her contract to give him her baby if she can guess his name. This riddle element of the story, Zipes points out, is a device to build and hold suspense -- the name itself has no meaning. So how does the queen discover that name? Her servant observes the little creep chanting, "Rumpelstiltskin is my name!" Given that clue, it didn't require advanced study in forensic science or cryptography to figure out what his name was.
Capital's self-disclosure is slightly more subtle than Rumpelstiltskin's – but not much. Instead of prancing around and chanting "the real measure of wealth is disposable time," capital enforces a transparent taboo against any such resolution. When I say "capital" I am referring to a being no less fabulous than Rumpelstiltskin but nevertheless representative of actual social relations. Capital spins straw into gold through an employment system. But for the mechanics of that employment system to remain hidden it must also continually spin golden theory into straw man dogma.
"All models are wrong," wrote George Box, "but some are useful." Note the subjectivity of utility. Some models are "useful" precisely because they are wrong. Others are presumed wrong because they might be useful to the wrong people. In his reply to the New York Times debate question "Was Marx Right?" Brad DeLong wrote, "I have long thought that Marx's fixation on the labor theory of value made his technical economic analyses of little worth." Marx's "fixation" is a figment of Brad's imagination, with a long and disreputable tradition amongst authorities whose familiarity with Marx's writing is second or third hand. On the contrary, Marx was arguing that Capital fixated on labour time as a measure of value and that such a fixation was its "moving contradiction":
Instead of prancing around, chanting "Rumpelstiltskin I am styled," apologists for longer hours bleat and repeat and repeat "Lump-of-labor fallacy!" whenever proposals for reducing work time are put forward. It's a nonsense phrase and that very nonsensicality should be a tipoff to its function as a shibboleth. Technological unemployment? "Lump of labor! Lump of labor!"
Rumpelstiltskin offers the miller's daughter/queen a way out of her contract to give him her baby if she can guess his name. This riddle element of the story, Zipes points out, is a device to build and hold suspense -- the name itself has no meaning. So how does the queen discover that name? Her servant observes the little creep chanting, "Rumpelstiltskin is my name!" Given that clue, it didn't require advanced study in forensic science or cryptography to figure out what his name was.
Capital's self-disclosure is slightly more subtle than Rumpelstiltskin's – but not much. Instead of prancing around and chanting "the real measure of wealth is disposable time," capital enforces a transparent taboo against any such resolution. When I say "capital" I am referring to a being no less fabulous than Rumpelstiltskin but nevertheless representative of actual social relations. Capital spins straw into gold through an employment system. But for the mechanics of that employment system to remain hidden it must also continually spin golden theory into straw man dogma.
"All models are wrong," wrote George Box, "but some are useful." Note the subjectivity of utility. Some models are "useful" precisely because they are wrong. Others are presumed wrong because they might be useful to the wrong people. In his reply to the New York Times debate question "Was Marx Right?" Brad DeLong wrote, "I have long thought that Marx's fixation on the labor theory of value made his technical economic analyses of little worth." Marx's "fixation" is a figment of Brad's imagination, with a long and disreputable tradition amongst authorities whose familiarity with Marx's writing is second or third hand. On the contrary, Marx was arguing that Capital fixated on labour time as a measure of value and that such a fixation was its "moving contradiction":
"Capital itself is the moving contradiction, [in] that it presses to reduce labour time to a minimum, while it posits labour time, on the other side, as sole measure and source of wealth. Hence it diminishes labour time in the necessary form so as to increase it in the superfluous form; hence posits the superfluous in growing measure as a condition – question of life or death – for the necessary. On the one side, then, it calls to life all the powers of science and of nature, as of social combination and of social intercourse, in order to make the creation of wealth independent (relatively) of the labour time employed on it. On the other side, it wants to use labour time as the measuring rod for the giant social forces thereby created, and to confine them within the limits required to maintain the already created value as value." (Grundrisse, "Fragment on Machines", Folks really ought to read the whole thing.)
Marx was writing critically about a fixation. That's different from having a fixation. Was Marx wrong about Capital's "positing labour time as sole measure and source of wealth"? That would be difficult to prove definitively but a long history of strenuous opposition to the reduction of working time is evidence in favour of the proposition.
Remember that shorter working time was a prime objective of labour unions up until at least the middle of the 20th century. It could be argued, I suppose, that resistance -- organized resistance -- from employers was simply irrational but that doesn't say much for the Homo economicus thesis. Nor does it say much for the academic economists who have cranked out facile but disdainful rationales for dismissing shorter work time demands.
Sunday, March 30, 2014
Inequality and Sabotage: Piketty, Veblen and Kalecki (for anne at Economist's View)
One of Thomas Piketty's central concerns in Capital in the 21st Century is the inequality between the rate of return on capital (r) and the growth rate (g), which he expresses as r>g. In a recent opinion piece in the Financial Times, "Save capitalism from the capitalists by taxing wealth," Piketty wrote:
Even if wage inequality could be brought under control, history tells us of another malign force, which tends to amplify modest inequalities in wealth until they reach extreme levels. This tends to happen when returns accrue to the owners of capital faster than the economy grows, handing capitalists an ever larger share of the spoils, at the expense of the middle and lower classes. It was because the return on capital exceeded economic growth that inequality worsened in the 19th century – and these conditions are likely to be repeated in the 21st.

Piketty's proposed (admittedly Utopian) remedy for the current tendency for returns to capital to accrue faster than the economy grows is a global wealth tax, which he describes as "difficult but feasible." One only needs to look at global climate negotiations to be skeptical of that feasibility assessment. There is a global consensus among governments on the need to limit greenhouse gas emissions, but they still can't agree on a means for doing so. How likely is it that governments would even agree on the need to limit returns on capital?
A more realistic proposal may be developed from consideration of the mechanism that underlies the r>g dynamic. Nearly a century ago, Thorstein Veblen offered insights into this mechanism in his The Engineers and the Price System. To Veblen r>g (although he didn't use that term) was a strategy pursued by business, not simply a statistical finding. As Veblen points out, "this is matter of course, and notorious. But it is not a topic on which one prefers to dwell." Accordingly, economists have preferred not to dwell on it. They have pretended it doesn't exist:
The mechanical industry of the new order is inordinately productive. So the rate and volume of output have to be regulated with a view to what the traffic will bear — that is to say, what will yield the largest net return in terms of price to the business men who manage the country's industrial system. Otherwise there will be “overproduction,” business depression, and consequent hard times all around. Overproduction means production in excess of what the market will carry off at a sufficiently profitable price. So it appears that the continued prosperity of the country from day to day hangs on a “conscientious withdrawal of efficiency” by the business men who control the country's industrial output. They control it all for their own use, of course, and their own use means always a profitable price. In any community that is organized on the price system, with investment and business enterprise, habitual unemployment of the available industrial plant and workmen, in whole or in part, appears to be the indispensable condition without which tolerable conditions of life cannot be maintained. That is to say, in no such community can the industrial system be allowed to work at full capacity for any appreciable interval of time, on pain of business stagnation and consequent privation for all classes and conditions of men. The requirements of profitable business will not tolerate it. So the rate and volume of output must be adjusted to the needs of the market, not to the working capacity of the available resources, equipment and man power, nor to the community's need of consumable goods. Therefore there must always be a certain variable margin of unemployment of plant and man power. Rate and volume of output can, of course, not be adjusted by exceeding the productive capacity of the industrial system. So it has to be regulated by keeping short of maximum production by more or less as the condition of the market may require. It is always a question of more or less unemployment of plant and man power, and a shrewd moderation in the unemployment of these available resources, a “conscientious withdrawal of efficiency,” therefore, is the beginning of wisdom in all sound workday business enterprise that has to do with industry. [emphasis added]

Veblen didn't attribute this strategy of sabotage to evil motives on the part of individual firms; on the contrary, it is an imperative for survival:
Should the business men in charge, by any chance aberration, stray from this straight and narrow path of business integrity, and allow the community's needs unduly to influence their management of the community's industry, they would presently find themselves discredited and would probably face insolvency. Their only salvation is a conscientious withdrawal of efficiency.

Veblen was referring, as his title indicates, to the effects of the "price system" -- the interaction in the market of supply and demand. The withdrawal of efficiency kept prices at profitable levels by limiting supply. But what about government intervention to ameliorate those effects through a full-employment policy of demand management (a government spending program)? Michal Kalecki's analysis in "The Political Aspects of Full Employment" addressed that prospect:
Clearly, higher output and employment benefit not only workers but entrepreneurs as well, because the latter's profits rise. And the policy of full employment outlined above does not encroach upon profits because it does not involve any additional taxation. The entrepreneurs in the slump are longing for a boom; why do they not gladly accept the synthetic boom which the government is able to offer them?

Kalecki outlined three categories of business objection to full employment by government spending: "(i) dislike of government interference in the problem of employment as such; (ii) dislike of the direction of government spending... (iii) dislike of the social and political changes resulting from the maintenance of full employment." It is the first and third of these objections that have the most direct bearing on the issue of r>g:
Under a laissez-faire system the level of employment depends to a great extent on the so-called state of confidence. If this deteriorates, private investment declines, which results in a fall of output and employment (both directly and through the secondary effect of the fall in incomes upon consumption and investment). This gives the capitalists a powerful indirect control over government policy: everything which may shake the state of confidence must be carefully avoided because it would cause an economic crisis. But once the government learns the trick of increasing employment by its own purchases, this powerful controlling device loses its effectiveness. Hence budget deficits necessary to carry out government intervention must be regarded as perilous. The social function of the doctrine of 'sound finance' is to make the level of employment dependent on the state of confidence.
...
It is true that profits would be higher under a regime of full employment than they are on the average under laissez-faire, and even the rise in wage rates resulting from the stronger bargaining power of the workers is less likely to reduce profits than to increase prices, and thus adversely affects only the rentier interests. But 'discipline in the factories' and 'political stability' are more appreciated than profits by business leaders. Their class instinct tells them that lasting full employment is unsound from their point of view, and that unemployment is an integral part of the 'normal' capitalist system.

For "state of confidence" substitute r>g; for "bargaining power of workers" substitute r<g. Veblen borrowed his term from the subtitle of Elizabeth Gurley Flynn's I.W.W. pamphlet, Sabotage: The Conscious Withdrawal of the Workers' Industrial Efficiency. Flynn's pamphlet was published in 1916, but the idea of workers deliberately restricting output is much older.
One of the most persistent objections to trade unions during the 19th century was that their principal mode of operation was to restrict production. Veblen simply turned this perennial complaint into a question about the 'innocence' of those making all the indignant accusations. Adam Smith had long ago observed famously, "People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices."
The missing link here, though, is the recognition that the particular efficiencies that the workers withdraw are not the same ones as those that the business firm withdraws. There are different modes of efficiency and those differences result in different effects on the rate of return to capital. In other words, there are r>g efficiencies and there are r<g efficiencies. An example of an r>g efficiency would be a new machine that uses less fuel and less labour to produce a given amount of output. An example of an r<g efficiency would be a reduction in the length of the standard working day that improves worker productivity by reducing fatigue and increasing overall well being. Both are examples of efficiencies but they differ as to whom the benefit of the efficiency gain primarily accrues.
Ironically, business has historically raised its most shrill objections to r<g efficiencies by making the false claim that they are intended as restrictions on production. The distinction between r>g efficiencies and r<g efficiencies also has profound implications for Say's Law (the vulgar version), the Jevons Paradox and Chapman's analysis of the effects of a reduction in the hours of labor, which I discussed in an earlier post.
Saturday, March 29, 2014
Keynes: Cultural Rebel
Paul Krugman’s column today on the psychology of Very Important People, their inability to accept a simple answer to the slump (spend more!) and their urge to pin it on a moral failing like too much public debt and too little skill acquisition, has reminded me of a thought that struck me several years ago when I read Skidelsky’s magnificent three-volume biography of Keynes (especially this and this).
Keynes was an innovative thinker in economics, but he was far more than that. His other great passion in life was his love for art, theater, dance, good food, and finely made things of all description. He liked travel. He liked entertaining and stimulating conversation. He liked sex. In short, he was an aesthete, a man with intense and finely developed tastes who lived for enjoyment. And ever since his student days, his greatest political commitment was to create a world in which as many others could enjoy these things as possible.
You could pigeonhole him in various ways. This was very British, of course, and characteristic of a tendency that marked the social and political radicals of the upper-middle class during the late nineteenth and early twentieth centuries. Without getting all Ferguson about it, this was also representative of a gay cultural milieu that emerged during the same period (and was linked to the larger cultural radicalism).
In crude terms, it was a backlash against the stifling moralism and (ostensible) self-denial of the Victorians, the Very Serious People of their day. For a VSP, progress depends on investment, which depends on saving, which depends on self-denial. This is what we learn from the Robinson Crusoe parable, the sermons on poverty delivered by Nassau Senior, and the popular fiction of Samuel Smiles. If people are enjoying themselves too much, it must be a sign that something is wrong.
As I thought about it, the economic dispute between Keynesians and “classicals” was the tip of a much larger cultural iceberg. Keynes thought that the slump could be surmounted by more spending, and more pleasure, and to his opponents this was not only wrong but monstrous. Surely economic breakdown represents moral breakdown at some level, and the remedy of further borrowing and spending can only deepen this descent into moral failure.
Nearly a century later, has it really changed? Do we see a rational debate over different economic policies based on reasoning (and that includes models) and evidence? Or are deep-seated cultural biases—economics as a Victorian morality tale—still what it’s all about?
Friday, March 28, 2014
Back to Econ 101: Costs versus Transfers
Isn’t it interesting how the most rudimentary concepts supposedly covered in the first weeks of an economics principles course keep coming back to us mangled beyond recognition in expert debate?
Take this post by Uwe Reinhardt, as leading an authority on US health economics as there is. He shows us a rather typical graph portraying the number of Quality Adjusted Life Years (QALYs, a measure that consolidates all health impacts into a single metric derived from utility theory) people can enjoy as a function of how much they spend to get it. A low-cost medical intervention buys us one year, the next year requires a little more cost, and so on, until the marginal cost of providing a bit more healthy living zooms up out of sight.
He’s interested in seeing to it that we position ourselves at an efficient point along the curve rather than one above it, and he also raises the question of how far along this curve we should be willing to travel:
Is there a maximum price above which society no longer wishes to purchase added QALYs from its health system, even with the most cost-effective treatments (e.g., Point C)?

This issue is crystallized for him by a recent report from an insurance advisory panel in California, which deemed a new hepatitis C drug "low value" compared to older drugs, even though it is somewhat more effective, because it is being sold by its producer at $1000 a pill. Here, says Reinhardt, is an example of society, or at least a portion of it with MDs and financed by the insurance industry, saying enough is enough: we can't climb that cost curve all the way to the top.
Just one problem, though. Reinhardt's graphic is about cost, presumably social cost. As we should have learned from our first econ textbook, this refers to the opportunity cost and disutility entailed in producing a good or service. The $1000 charged for a single pill, while it covers resources consumed in R&D and direct production, in all probability also contains a hefty element of monopoly markup. That's the whole point of the patent system, after all. And this markup is a transfer, not a cost. We could well be on the flatter, left end of Reinhardt's cost curve even as we shell out vast sums for this latest medical marvel.
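To make the distinction concrete, here is a minimal sketch in which every number is hypothetical, invented purely for illustration: a stylized convex resource-cost curve for QALYs of the kind Reinhardt draws, plus a patent markup that inflates expenditure without adding to social cost.

```python
# Hypothetical illustration of cost versus transfer in a QALY framework.
# Every number here is invented; nothing reflects actual drug economics.

def resource_cost(qalys):
    """Stylized convex social (resource) cost of producing QALYs:
    cheap interventions first, marginal cost rising steeply after."""
    return 100.0 * qalys ** 3  # dollars

def marginal_resource_cost(qalys, dq=0.01):
    return (resource_cost(qalys + dq) - resource_cost(qalys)) / dq

price_per_pill = 1000.0        # what the buyer is billed
assumed_unit_cost = 150.0      # hypothetical R&D + production per pill
transfer = price_per_pill - assumed_unit_cost  # monopoly markup

print(f"Of the ${price_per_pill:,.0f} price, ${transfer:,.0f} is a transfer")
print(f"and only ${assumed_unit_cost:,.0f} is a social resource cost.")

# An expenditure-based 'cost per QALY' overstates the social cost per QALY
# by the markup, so heavy spending can coexist with a position on the flat,
# left end of the true (resource) cost curve.
for q in (1, 2, 4):
    print(f"at {q} QALYs, marginal resource cost = "
          f"${marginal_resource_cost(q):,.0f} per QALY")
```

The toy numbers matter only in that expenditure and social cost diverge by exactly the markup; where we actually sit on the cost curve depends on the resource component alone.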
Now I suppose you could argue that such transfers are needed to induce people to take up medical research as a profession, other people to provide their labs, etc. That was apparently the topic of a debate between Reinhardt and Dean Baker, which Reinhardt refers to in his post. But that’s a separate question. It does not overturn the wise judgment of elementary economics on the distinction between a cost and a transfer. It’s amazing that I even need to write this.
Thursday, March 27, 2014
"Figure Eight": Another Jevons Paradox
In The Coal Question, William Stanley Jevons argued that "It is wholly a confusion of ideas to suppose that the economical use of fuel is equivalent to a diminished consumption. The very contrary is the truth." As Jevons himself pointed out, the same principle was generally recognized with respect to the effects of labour-saving machinery on employment. Classical political economist John Ramsay McCulloch had stated the latter view emphatically in an 1827 Edinburgh Review essay on the progress of the British cotton industry, "There is, in fact, no idea so groundless and absurd, as that which supposes that an increased facility of production can under any circumstances be injurious to the labourers."
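The mechanism behind Jevons's claim can be put in a line or two of algebra. As a hedged sketch (the constant-elasticity demand function is my simplifying assumption, not Jevons's), let $\eta$ be energy efficiency, $p_E$ the price of energy, and suppose demand for the energy service $S$ has price elasticity $\varepsilon$:

$$ S = A\,p_S^{-\varepsilon}, \qquad p_S = \frac{p_E}{\eta}, \qquad E = \frac{S}{\eta} = A\,p_E^{-\varepsilon}\,\eta^{\varepsilon-1}. $$

Total energy use $E$ therefore rises with efficiency $\eta$ whenever $\varepsilon > 1$ (Jevons's case) and falls when $\varepsilon < 1$, which is why the paradox is an empirical question about elasticities rather than a logical necessity.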
The Jevons Paradox remains a complex, ambiguous and controversial idea. In a nutshell, Jevons argued that an improvement in energy efficiency reduced the cost per unit of output, which in turn led to an increase in quantity demanded and ultimately to a rebound of energy consumption beyond what it would otherwise have been in the absence of the efficiency gain. This essay will return to the rebound debate in a later section. First, though, comes the matter of introducing another Jevons paradox, one that has a crucial bearing on the scientific status of neoclassical economics. Jevons was not the author of this other paradox; rather, it is a paradox about the reception of his work.
In chapter five of his Theory of Political Economy, Jevons presented a theory of work effort that relies on the concept of the disutility of excessive toil.
"A few hours' work per day may be considered agreeable rather than otherwise; but so soon as the overflowing energy of the body is drained off, it becomes irksome to remain at work. As exhaustion approaches, continued effort becomes more and more intolerable."Following those remarks, Jevons quoted extensively from Richard Jennings, whose statement of "this law of variation" he praised for its clarity. "There can be no question," concluded Jevons, "of the general truth of the above statement," which he then illustrated with a diagram, labeled Figure VIII. "We may imagine the painfulness of labour in proportion to produce to be represented by some such curve as abcd in Fig. VIII.":
Jevons went on to describe how the curve Pq in his diagram represented the diminishing utility to the worker of the wages earned as the duration of work increased. He concluded that there will be "some point m such that qm = dm, that is to say, such that the pleasure gained [from wages] is exactly equal to the labour endured."In this diagram the height of points above the line ox denotes pleasure, and depth below it pain. At the moment of commencing labour it is usually more irksome than when the mind and body are well bent to the work. Thus, at first, the pain is measured by oa. At b there is neither pain nor pleasure. Between b and c an excess of pleasure is represented as due to the exertion itself. But after c the energy begins to be rapidly exhausted, and the resulting pain is shown by the downward tendency of the line cd.
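Jevons's geometry translates directly into a small numerical exercise. A hedged sketch follows: the functional forms and numbers are my own hypothetical stand-ins (Jevons drew curves, not equations), but the stopping rule is his, namely work up to the point m where the pleasure gained from wages just equals the pain of labour endured.

```python
# A toy numerical version of Jevons's Figure VIII. The functional forms
# are hypothetical stand-ins chosen only to reproduce the abcd shape;
# Jevons drew the curves but never specified equations.

def labour_feeling(h):
    """Net pleasure (+) or pain (-) of the h-th hour of work: irksome at
    first (oa), pleasant in the middle (b to c), then exhausting (cd)."""
    return -0.5 + 1.4 * h - 0.12 * h ** 2 - 0.004 * h ** 3

def wage_utility(h):
    """Diminishing utility of the produce earned in the h-th hour (Pq)."""
    return 3.0 / (1.0 + 0.5 * h)

def find_m(lo=0.0, hi=16.0, tol=1e-6):
    """Jevons's point m, where the pleasure gained from wages exactly
    offsets the pain of labour endured (qm = dm): bisection on the net
    gain from one more hour of work."""
    net = lambda h: wage_utility(h) + labour_feeling(h)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if net(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(f"Optimal working day in this toy calibration: {find_m():.2f} hours")
```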
"Despite its strengths," mused Robert Kerton 100 years after its publication, "recent textbooks ignore Jevons' theory while elevating to supremacy indifference curve analysis." In a footnote he added that "A survey of six recent textbooks on labor economics found no mention of Jevons' approach."
In the intervening years, though, one of Alfred Marshall's star pupils, Sydney J. Chapman, added a few wrinkles -- and several curves and letters, to be exact -- to Jevons's diagram. Curve L in Chapman's diagram is clearly analogous to Jevons's curve abcd. While the Jevons diagram described the decision faced by an individual worker, Chapman's sought to capture both the immediate and long-period effects of changes in working time on workers and employers in the economy as a whole. He simplified the model by assuming a single industry and workers who had uniform income preferences and endurance.

Chapman's general conclusion from this analysis was "that progress may be expected to be accompanied by a progressive curtailment of the working day." Along the way, though, he observed that in a competitive market, progressive employers who invested in the future well-being of their workers by voluntarily reducing the hours of work would risk having those workers poached by other employers willing to pay higher wages without having invested in their improved well-being.
As the Sandwichman has pointed out -- repeatedly -- Chapman's analysis of the hours of labour was acknowledged as canonical by such Cambridge and LSE luminaries as Alfred Marshall, A. C. Pigou, J. R. Hicks and Lionel Robbins. Both Pigou and Hicks presented comprehensive summaries of it, in The Economics of Welfare and The Theory of Wages, respectively. In fact, Chapman's theory of hours was one of the conceptual pillars of Pigou's analysis of externalities in The Economics of Welfare. So it seems somewhat strange that Chapman's analysis is virtually forgotten by modern economists.
By "forgotten," I mean economists routinely rely on the assumption that "the given (presumably market-determined) hours of work are optimal" -- usually without even being aware that they making an assumption.
So here is what is so paradoxical: while the formally labeled Jevons Paradox describes the relentless pursuit of both labour and energy efficiency driving expanded consumption of both through lower unit costs, Jevons's other paradox has to do with the relentless resistance put up against an alternative method of labour efficiency, one associated with resource conservation. That resistance would not be at all out of place in Thorstein Veblen's analysis, in The Engineers and the Price System, of the ubiquitous restriction of output imposed by business to obtain higher prices.
Another Reason Why the Realism of Assumptions Matters
Mark Thoma has wisely directed us to a new paper by Stanford’s Paul Pfleiderer that makes the case for distinguishing between models that do and don’t have realistic assumptions. It’s a great read and makes a number of points I would heartily endorse.
Meanwhile, I’d like to add one more argument to the mix. I take it as axiomatic that the economic world is far too complex and variegated to be comprehended or forecast by any single model. Sometimes one set of factors is paramount, and a particular model captures its dynamics; then another set takes over, and if you continue to follow the first model you’re toast. There are complicated times when you need a bunch of models all at once to make sense of what’s going on, even when they disagree with one another in certain respects.
So how do you know which model to use when? The answer has to be that you observe as carefully as you can the conditions that obtain right then and there, determine which are the most salient, and then pick the model(s) that are best fitted, by their assumptions, to those conditions. This of course is precisely what we mean by the realism of assumptions: are they reflective of the reality to which they might be applied?
The doctrine that the realism of assumptions doesn’t matter could be defensible only in a world in which it is axiomatic that only a single model, the one that wins the empirical prediction game, will be used in all circumstances to the exclusion of all the rest. That axiom underlies the canonical Friedman formulation in particular. Pfleiderer’s contribution is to show that even this is not enough: given the limited power of real-world tests, some filtering of models according to their assumptions is mandatory.
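A toy simulation makes the point. Everything below is hypothetical: two data-generating "regimes," and two fixed models whose assumptions each match only one of them. The model that wins the prediction contest in the first regime is toast after the switch, which is exactly why assumptions have to be checked against conditions.

```python
import random

random.seed(42)

# Two hypothetical regimes: in regime A the variable y tracks x one for
# one; in regime B the relationship flips sign. Model 1 assumes A holds,
# model 2 assumes B. Nothing here is estimated; it is a pure toy.
def generate(regime, n=500):
    slope = 1.0 if regime == "A" else -1.0
    return [(x, slope * x + random.gauss(0, 0.1))
            for x in (random.gauss(0, 1) for _ in range(n))]

def mse(model_slope, data):
    """Mean squared prediction error of a fixed-slope model."""
    return sum((y - model_slope * x) ** 2 for x, y in data) / len(data)

# Model 1 wins the empirical prediction game while regime A holds...
a = generate("A")
print(f"regime A: model 1 mse={mse(1.0, a):.3f}, model 2 mse={mse(-1.0, a):.3f}")

# ...and fails badly after the switch, though the model itself is
# unchanged; only the realism of its assumptions has changed.
b = generate("B")
print(f"regime B: model 1 mse={mse(1.0, b):.3f}, model 2 mse={mse(-1.0, b):.3f}")
```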
Tuesday, March 25, 2014
Mankiw’s Philosophical Case Against Taxing Capital Income
Isaac Chotiner provided a wonderful rebuttal to something Greg Mankiw wrote about minimum wages and Obamacare. For me, the takeaway line was:
Notice that Mankiw first claims there is no way to compare people's happiness, and then he goes ahead and...makes a comparison, just in the opposite direction.

The reason this is my takeaway line is that Mankiw is at it again with a criticism of something Paul Krugman wrote. Up first, Paul:
most people realize that today’s G.O.P. favors the interests of the rich over those of ordinary families. I suspect, however, that fewer people realize the extent to which the party favors returns on wealth over wages and salaries. And the dominance of income from capital, which can be inherited, over wages — the dominance of wealth over work — is what patrimonial capitalism is all about. To see what I’m talking about, start with actual policies and policy proposals. It’s generally understood that George W. Bush did all he could to cut taxes on the very affluent, that the middle-class cuts he included were essentially political loss leaders. It’s less well understood that the biggest breaks went not to people paid high salaries but to coupon-clippers and heirs to large estates. True, the top tax bracket on earned income fell from 39.6 to 35 percent. But the top rate on dividends fell from 39.6 percent (because they were taxed as ordinary income) to 15 percent — and the estate tax was completely eliminated.

Greg’s only rebuttal was to present the literature on the alleged optimality of not taxing capital income. The argument, simply put, is that not taxing capital income is one way to encourage savings and a higher level of steady-state output per capita. Of course, this simple argument, known since Frank Ramsey made it in 1927, ignores several things, including the fact that current generations must by definition consume less. It also ignores the distributional effects that Paul was stressing. The biggest omission, however, is the fact that Bush’s fiscal policies in general actually reduced national savings through what some would argue was reckless fiscal policy. Of course, Greg Mankiw tried to defend this fiscal stimulus by saying it was necessary to head off a recession – or something like that. But hey – we do know that Bush’s fiscal policy made very rich people better off and, as Isaac notes, Greg loves rich people.
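For readers who want the mechanics behind that literature, here is the textbook steady-state condition in hedged, stripped-down form; the notation and the frictionless neoclassical growth setting are my simplifications, not a claim about any particular paper:

$$ (1-\tau_k)\left[f'(k^*) - \delta\right] = \rho + \theta g, $$

where $\tau_k$ is the capital income tax rate, $f'(k^*)$ the marginal product of capital at the steady-state capital-labor ratio $k^*$, $\delta$ the depreciation rate, $\rho$ the discount rate, $\theta$ the inverse elasticity of intertemporal substitution, and $g$ trend growth. Because $f'' < 0$, a higher $\tau_k$ requires a higher pre-tax return and hence implies a lower $k^*$ and lower steady-state output per capita. The same equation displays what the argument leaves out: reaching a higher $k^*$ means consuming less along the transition, and nothing in it says who owns the capital.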
Monday, March 24, 2014
Has Janet Yellen Given Up On Her 2% Inflation Target?
Or, more precisely, has it become a ceiling rather than a target? Tim Duy and Dean Baker are among those who read Janet Yellen's first press conference as Fed Chair, and the reports of the first FOMC meeting she chaired, as indicating that 2% is now a ceiling to be approached from below rather than a target that could be approached from above. I am not sure this is right, but if it is, it would vindicate the market's initial reaction that interest rate rises may come sooner than previously thought, despite the welcome abandonment of the 6.5% unemployment rate trigger point and the continuing declines in labor force participation by working-age persons. If the ceiling argument is correct, what might lie behind it?
First we need to remind ourselves where this 2% inflation target came from and of Yellen's crucial role in its initial adoption by the Fed and its subsequent spread. Inflation targeting had been going on in various countries since 1990, beginning with New Zealand, but until 1996 it was mostly smaller nations doing so, and the targets they adopted were rarely 2%. In that year, however, Yellen's husband, George Akerlof, along with Dickens and Perry at Brookings, published a paper suggesting that a 2% inflation rate might balance the need to control inflation against the need for microeconomic labor market adjustments through changing real relative wages. A positive rate of inflation was needed for the latter because of the empirical ubiquity of downward stickiness of nominal wages: if nominal wages cannot fall, the only way to change real relative wages is for the wages of relatively scarce kinds of labor to rise in nominal terms. There was in fact no rigorous derivation of a specific inflation rate that would be optimal for this balancing act; 2% was essentially thrown out as a level that might do the trick. That same year, Janet Yellen convinced Alan Greenspan of this argument in a private conversation revealed only later, as Fed transcripts became public, with Greenspan shifting from a zero inflation target to the 2% level. There was never any official target during his chairmanship, nor well into the chairmanship of Ben Bernanke, even though it was widely known from not long after 1996 that 2% was effectively the de facto Fed target inflation rate.
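The Akerlof-Dickens-Perry mechanism is easy to simulate. The sketch below is a deliberately crude toy (the parameter choices are mine, not theirs): desired real wage changes are drawn at random, nominal cuts are forbidden, and we ask how much relative-wage adjustment actually gets through at 0% versus 2% inflation.

```python
import random

random.seed(0)

def adjustment_shortfall(inflation, n_workers=100_000):
    """Fraction of desired real wage adjustment blocked by downward
    nominal rigidity, at a given inflation rate (toy calibration)."""
    blocked, desired = 0.0, 0.0
    for _ in range(n_workers):
        # Desired real wage change in this worker's market, +/- a few %.
        real_change = random.gauss(0.0, 0.03)
        nominal_change = real_change + inflation
        desired += abs(real_change)
        if nominal_change < 0:
            # Nominal wages cannot fall: the cut simply doesn't happen.
            blocked += -nominal_change
    return blocked / desired

for pi in (0.00, 0.01, 0.02):
    print(f"inflation {pi:.0%}: {adjustment_shortfall(pi):.1%} "
          "of desired real adjustment is blocked")
```

More inflation "greases the wheels": the higher the inflation rate, the smaller the share of desired relative-wage adjustment that the no-nominal-cuts constraint blocks.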
Given that decision, this target gradually spread throughout most of the other leading central banks, so that in recent years to varying degrees it has been either the official or de facto target of the Bank of England, the Bank of Japan, the Bank of Canada, and the European Central Bank, among others. It has effectively become the world target inflation rate, and in fact inflation rates in both the US and Canada tracked it fairly closely over quite a few years, whether due to the effects of conscious monetary policy or for other reasons.
However, in recent years the rate has drifted downward in many of these countries, including the US as well as the Eurozone, closer to 1% than 2% in the latter. In practice, the rate has been in the 1-2% range for many of these countries for several years. Clearly, if 2% is the target, this suggests that expansionary monetary policy should be followed, and it largely has been. But we have now seen several months in which the stimulus has been gradually scaled back in the US, with the "taper" policy steadily reducing the rate of QE3 expansion of the Fed's base, an expansion targeted to stop by the end of the year. The push to get to a 2% inflation rate seems to be taking a back seat to fears of inflation rising too rapidly, perhaps through asset bubbles.
While the ECB does not seem to have been pushing too hard to be expansionary, some other central banks have recently made dramatic efforts to be more so, notably the Bank of Japan under Abenomics, though still without reaching the 2% target. As it is, the 2% target remains in place in pretty much all these countries, even as none seems to be achieving it. What is going on?
One possible explanation is that, in effect, a loophole has opened in the downward nominal stickiness of wages. It is not that lots of workers are now accepting nominal wage cuts in their current jobs, although more have done so than in the past. Rather, workers are being laid off and then rehired at other jobs at lower wages. Few in-place wages have fallen in nominal terms, but many actual workers have experienced nominal cuts through the layoff-and-rehire route. This may be the loophole that is making it hard to achieve the 2% target. It is regrettable, but it may be that Yellen and her colleagues have observed it, and that it lies behind the shift to making the widely publicized 2% target into a ceiling, with the Akerlof, Dickens, and Perry prescription now effectively revised, if not loudly or in public.
Barkley Rosser