Friday, May 9, 2014

Scoundrel Time

Twenty years ago, the Journal of Economic Issues published a note by Warren Samuels on "'Shirking' and 'Business Sabotage'." Citing Thorstein Veblen's analysis of business sabotage, Samuels was scathing in his criticism of the ideological double standard of the mainstream efficiency wage literature:
Pejorative emphasis on shirking merely, but effectively, constitutes either taking the employer side or assuming that there is nothing further to be worked out, which in practice is typically simply inaccurate. Indeed, even talking about management's failure to solve the principal-agent problem privileges the employer position. Mainstream theory is asymmetrical.
And:
The analyst who opposes worker shirking without criticism of industrial sabotage is taking sides, and the analyst who opposes industrial sabotage without criticism of shirking is also taking sides.
It's worth delving into just how pejorative shirk is. Oxford defines it as "to avoid meanly, to shrink selfishly from duty... Slink or sneak away, practice fraud or trickery..." Etymologically, the word is suspected of coming from the German Schurke, a scoundrel. Well, there's your value-free, positive, normative-eschewing neoclassical economics for you!

Samuels's criticism was resoundingly ignored by economists theorizing about shirking. A Google Scholar search turns up ten citations, four of them by David Spencer. By contrast there are 4575 results for a search on the canonical source by Carl Shapiro and Joseph Stiglitz, "Equilibrium Unemployment as a Worker Discipline Device."

Searching inside the search results for Shapiro and Stiglitz gives further insight into the asymmetry of mainstream theory. Using ten phrases such as "shirking workers" and "employee who shirks," and a like number of complementary phrases such as "shirking employers" and "firms that shirk," returns totals of 588 and 3, respectively, after eliminating false positives for the latter such as "…to prevent shirking, employers…"

Interestingly enough, one of the three dissident results that turns up is from a July 1933 fireside chat by Franklin Delano Roosevelt, explaining the New Deal's National Industrial Recovery Act:
The proposition is simply this: 
If all employers will act together to shorten hours and raise wages we can put people back to work. No employer will suffer, because the relative level of competitive cost will advance by the same amount for all. But if any considerable group should lag or shirk, this great opportunity will pass us by and we will go into another desperate winter. This must not happen.
...
It will be clear to you, as it is to me, that while the shirking employer may undersell his competitor, the saving he thus makes is made at the expense of his country’s welfare... 

May 8 And May 9

The Washington Post reports (5/8/14) that on the 69th anniversary of the end of WW II in Europe, Susan Eisenhower, granddaughter of the commander of the Normandy invasion, spoke at the WW II memorial in Washington, reciting her grandfather's "Victory Order of the Day." I quote the opening passages: "Men and women of the Allied Expeditionary Force,...The crusade on which we embarked in the early summer of 1944 has reached its glorious conclusion...Full victory in Europe has been attained."

Today in Moscow, May 9, there will be, as there has been for 69 years, a celebration of the Stalin-led Soviet victory over Hitler's Germany; at the first such celebration the flags of Hitler and his allies were repudiated and massively, publicly reviled.

WaPo reported Susan Eisenhower's speech at DC's pathetic WW II memorial (no disrespect to Bob Dole, whose one functioning hand I once shook, and who is responsible for the fact that even this pathetic joke of a memorial exists in Washington) on p. 2, as a barely there story.

So, I do not yet know what the ceremony in Moscow will be, but for this one, a day later than the US one, there will be a total national blowout in celebration. The simple explanation for this is that US deaths were around 400,000, while those in what used to be the USSR were over 20 million.

This celebration intersects with the Russian-Ukrainian conflict: on the surface both sides agree in celebrating, but many, particularly in parts of eastern Ukraine, see it more precisely as a statement of their separate existence from Russia (or the former USSR).

Fortunately, the latest reports are that Putin is pulling back troops from the border. He is discouraging the barely functional separatist referendum in eastern Ukraine, which by last report was only ready to go in maybe three cities. Putin knows this is not enough, and on this great anniversary he does not want to start another world war.

Barkley Rosser

Wednesday, May 7, 2014

Regression Analysis and the Tyranny of Average Effects

What follows is a summary of a mini-lecture I gave to my statistics students this morning.  (I apologize for the unwillingness of Blogger to give me subscripts.)

You may feel a gnawing discomfort with the way economists use statistical techniques.  Ostensibly they focus on the difference between people, countries or whatever the units of observation happen to be, but they nevertheless seem to treat the population of cases as interchangeable—as homogenous on some fundamental level.  As if people were replicants.

You are right, and this brief talk is about why and how you’re right, and what this implies for the questions people bring to statistical analysis and the methods they use.

Our point of departure will be a simple multiple regression model of the form

y = β0 + β1 x1 + β2 x2 + .... + ε

where y is an outcome variable, x1 is an explanatory variable of interest, the other x’s are control variables, the β’s are coefficients on these variables (or a constant term, in the case of β0), and ε is a vector of residuals.  We could apply the same analysis to more complex functional forms, and we would see the same things, so let’s stay simple.
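To make this concrete, here is a minimal sketch in Python of fitting such a model (statsmodels on simulated data; every variable name, coefficient, and the data-generating process is made up purely for illustration):

```python
# Minimal sketch: fit y = b0 + b1*x1 + b2*x2 + e on simulated data and
# read off the estimated coefficients, i.e. the average effects.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)                 # explanatory variable of interest
x2 = rng.normal(size=n)                 # a control variable
eps = rng.normal(size=n)                # residuals
y = 1.0 + 2.0 * x1 - 0.5 * x2 + eps     # "true" coefficients chosen arbitrarily

X = sm.add_constant(np.column_stack([x1, x2]))   # adds the constant-term column
fit = sm.OLS(y, X).fit()
print(fit.params)    # estimates of b0, b1, b2: one average effect per regressor
```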

What question does this model answer?  It tells us the average effect that variations in x1 have on the outcome y, controlling for the effects of other explanatory variables.  Repeat: it’s the average effect of x1 on y.

This model is applied to a sample of observations.  What is assumed to be the same for these observations?  (1) The outcome variable y is meaningful for all of them.  (2) The list of potential explanatory factors, the x’s, is the same for all.  (3) The effects these factors have on the outcome, the β’s, are the same for all.  (4) The proper functional form that best explains the outcome is the same for all.  In these four respects all units of observation are regarded as essentially the same.

Now what is permitted to differ across these observations?  Simply the values of the x’s and therefore the values of y and ε.  That’s it.

Thus measures of the difference between individual people or other objects of study are purchased at the cost of immense assumptions of sameness.  It is these assumptions that both reflect and justify the search for average effects.

Well, this is a bit harsh.  In practice, one can relax these assumptions a bit.  The main way this is done is by interacting your x’s.  If x1 is years of education and x2 is gender (with male = 1), the variable x1 x2 tells us that education is regarded as a factor if the observation is male, otherwise not.  In this way the list of x’s and their associated β’s can be different for different subgroups.  That’s a step in the right direction, but one can go further.
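Here is a sketch of what entering such an interaction looks like in practice, again on made-up data (the educ and male names are illustrative, and the formula interface of statsmodels is just one way to do it):

```python
# Sketch: education interacted with a gender dummy, so the education slope
# is allowed to differ between the two groups. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "educ": rng.integers(8, 21, size=n),    # years of education
    "male": rng.integers(0, 2, size=n),     # gender dummy, male = 1
})
df["wage"] = (5 + 0.8 * df["educ"] + 0.4 * df["educ"] * df["male"]
              + rng.normal(scale=2.0, size=n))

# educ:male is the interaction term; its coefficient is the extra education
# effect for the male = 1 group, on top of the common educ coefficient.
fit = smf.ols("wage ~ educ + male + educ:male", data=df).fit()
print(fit.params)
```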

So what other methods are there that make fewer assumptions about the homogeneity of our study samples?  The simplest is partitioning subsamples.  Look at men and women, different racial groups or surplus and deficit countries separately.  Rather than search for an average effect for all observations, allow the effects to be different for different groups.

Interacting variables comes close to this if you interact group affiliation with every other explanatory variable.  It doesn't go all the way, however, because (1) it still requires the same outcome variable for each subgroup and (2) it imposes the same structural form.  Running separate models on subsamples gives you the freedom to vary everything.
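A sketch of full partitioning on the same kind of made-up data: each subgroup gets its own regression, so the controls, coefficients, and functional form are all free to differ:

```python
# Sketch: run a separate regression for each group instead of one pooled model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({"educ": rng.integers(8, 21, size=n),
                   "male": rng.integers(0, 2, size=n)})
df["wage"] = (5 + (0.8 + 0.4 * df["male"]) * df["educ"]
              + rng.normal(scale=2.0, size=n))

for label, sub in df.groupby("male"):
    fit = smf.ols("wage ~ educ", data=sub).fit()   # nothing shared across groups
    print(f"male={label}: educ coefficient = {fit.params['educ']:.3f}")
```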

When should you evaluate subsamples?  Whenever you can.  It is much better than just assuming that all factors, effects, and sensible regression choices are the same for everyone.

A different approach is multilevel modeling.  Here you accept the assumption that y, the x’s and the functional form are the same for everyone, but you permit the β’s to be different for different groups.  Compared to flat-out sample partition, this forces much more homogeneity on your model, but in return you get to analyze the factors that cause these β’s to vary.  It is a way to get more insight into the diversity of effects you see in the world.
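Here is a sketch of a random-coefficients version of this idea on simulated grouped data; statsmodels' MixedLM is one way to let the slope vary across groups while the outcome, regressors, and functional form stay common:

```python
# Sketch: a multilevel model in which each group has its own intercept and
# its own education slope, drawn around common population values.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
frames = []
for g in range(20):                               # 20 groups, 50 observations each
    slope = 0.8 + rng.normal(scale=0.3)           # group-specific slope on educ
    educ = rng.integers(8, 21, size=50)
    wage = 5 + slope * educ + rng.normal(scale=2.0, size=50)
    frames.append(pd.DataFrame({"group": g, "educ": educ, "wage": wage}))
df = pd.concat(frames, ignore_index=True)

# random intercept and random educ slope for each group
model = smf.mixedlm("wage ~ educ", df, groups=df["group"], re_formula="~educ")
result = model.fit()
print(result.summary())
```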

Third, you could get really radical and put aside the regression format altogether.  Consider principal components analysis, whose purpose is not hypothesis testing (measurement of effects, average or not), but the structure of diversity within your sample population.  What PCA does, roughly, is to find a cluster of correlations that appear among the variables you specify, making no distinction between explanatory and outcome variables.  That gives you a principal component, understood as a subpopulation with distinctive characteristics.  Then the procedure analyzes the remaining variation not accounted for in the first set of correlations; it comes up with a second cluster which describes a second subgroup with its own set of attributes.  It does this again and again until you stop, although, in social science data, you rarely get more than three significant principal components, and perhaps fewer than this.  PCA is all about identifying the “tribes” in your data sets—what makes them internally similar and externally different.
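A sketch of PCA on a small block of made-up variables, with no distinction between explanatory and outcome variables; scikit-learn's PCA is one standard implementation (standardizing first so scale differences don't drive the components):

```python
# Sketch: four observed variables built from two latent "tribal" traits;
# PCA recovers the two clusters of correlated variables.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 300
latent = rng.normal(size=(n, 2))                  # two underlying traits
X = np.column_stack([latent[:, 0] + rng.normal(scale=0.3, size=n),
                     latent[:, 0] + rng.normal(scale=0.3, size=n),
                     latent[:, 1] + rng.normal(scale=0.3, size=n),
                     latent[:, 1] + rng.normal(scale=0.3, size=n)])

pca = PCA(n_components=3).fit(StandardScaler().fit_transform(X))
print(pca.explained_variance_ratio_)   # usually only the first couple matter
print(pca.components_)                 # loadings: which variables hang together
```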

In the end, statistical analysis is about imposing a common structure on observations in order to understand differentiation.  Any structure requires assuming some kinds of sameness, but some approaches make much more sweeping assumptions than others.  An unfortunate symbiosis has arisen in economics between statistical methods that excessively rule out diversity and statistical questions that center on average (non-diverse) effects.  This is damaging in many contexts, including hypothesis testing, program evaluation, forecasting—you name it.

I will mention just one example from my own previous work.  There is a large empirical literature on whether and to what extent workers receive compensating differentials for dangerous work, a.k.a. hazard pay.  In nearly every instance the researcher wants to find “the” coefficient on risk in a wage regression.  But why assume such a thing?  Surely some workers receive ample, fully compensating hazard pay.  Some receive nothing.  Some, even if you control for everything you might throw in, have both lower wages and more dangerous jobs, because there is an irreducible element of luck in the labor market.  Surely a serious look at the issue would try to understand the variation in hazard pay: who gets it, who doesn't, and why.  But whole careers have been built on not doing this and assuming, instead, that the driving purpose is to isolate a single average effect, “the” willingness to pay for a unit of safety as a percent of the worker’s wage.  It’s beyond woozy; it’s completely wrongheaded.
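To give a flavor of what looking at the variation might involve, here is a sketch on entirely made-up data (not drawn from any actual wage study) that estimates the wage-risk relationship at several quantiles rather than as a single average coefficient:

```python
# Sketch: heterogeneous hazard pay by construction, then quantile regressions
# showing very different risk coefficients at different points of the wage
# distribution instead of one average effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 2000
risk = rng.uniform(0, 10, size=n)                                  # job risk measure
premium = rng.choice([0.0, 0.5, 1.5], size=n, p=[0.3, 0.4, 0.3])   # some get hazard pay, some get none
wage = 20 + premium * risk + rng.normal(scale=5.0, size=n)         # plus labor-market luck
df = pd.DataFrame({"risk": risk, "wage": wage})

for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("wage ~ risk", df).fit(q=q)
    print(f"quantile {q}: risk coefficient = {fit.params['risk']:.2f}")
```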

The first step toward recovery is admitting you have a problem.  Every statistical analyst should come clean about what assumptions of homogeneity are being made, in light of their plausibility and the opportunities that exist for relaxing them.

UPDATE: I fixed a couple of bloopers in the original post (an inappropriate reference to IV and a misspelling of PCA).

On Piketty "Not Reading Capital"

Did he or didn't he?:
Like his predecessors, Marx totally neglected the possibility of durable technological progress and steadily increasing productivity, which is a force that can to some extent serve as a counterweight to the process of accumulation and concentration of private capital.  

My conclusions are less apocalyptic than those implied by Marx’s principle of infinite accumulation and perpetual divergence (since Marx’s theory implicitly relies on a strict assumption of zero productivity growth over the long run).

For Marx, the central mechanism by which “the bourgeoisie digs its own grave” corresponded to what I referred to in the Introduction as “the principle of infinite accumulation”: capitalists accumulate ever increasing quantities of capital, which ultimately leads inexorably to a falling rate of profit (i.e., return on capital) and eventually to their own downfall. Marx did not use mathematical models, and his prose was not always limpid, so it is difficult to be sure what he had in mind. But one logically consistent way of interpreting his thought is to consider the dynamic law β = s / g in the special case where the growth rate g is zero or very close to zero. 

In Marx’s mind, as in the minds of all nineteenth- and early twentieth-century economists before Robert Solow did his work on growth in the 1950s, the very idea of structural growth, driven by permanent and durable growth of productivity, was not clearly identified or formulated.... Today we know that long-term structural growth is possible only because of productivity growth. But this was not obvious in Marx’s time, owing to lack of historical perspective and good data.

That Marx actually had a model of this kind in mind (i.e., a model based on infinite accumulation of capital) is confirmed by his use on several occasions of the account books of industrial firms with very high capital intensities. In volume 1 of Capital, for instance, he uses the books of a textile factory, which were conveyed to him, he says, “by the owner,” 

Marx was also an assiduous reader of British parliamentary reports from the period 1820–1860. He used these reports to document the misery of wage workers, workplace accidents, deplorable health conditions, and more generally the rapacity of the owners of industrial capital. He also used statistics derived from taxes imposed on profits from different sources, which showed a very rapid increase of industrial profits in Britain during the 1840s. Marx even tried—in a very impressionistic fashion, to be sure—to make use of probate statistics in order to show that the largest British fortunes had increased dramatically since the Napoleonic wars.  
The problem is that despite these important intuitions, Marx usually adopted a fairly anecdotal and unsystematic approach to the available statistics. 

Marx seems to have missed entirely the work on national accounting that was developing around him, and this is all the more unfortunate in that it would have enabled him to some extent to confirm his intuitions concerning the vast accumulation of private capital in this period and above all to clarify his explanatory model. 

No doubt Marx’s literary talent partially accounts for his immense influence.

In Chapter 6 I return to the theme of Marx’s use of statistics. To summarize: he occasionally sought to make use of the best available statistics of the day (which were better than the statistics available to Malthus and Ricardo but still quite rudimentary), but he usually did so in a rather impressionistic way and without always establishing a clear connection to his theoretical argument.

Tuesday, May 6, 2014

Gary Becker: Able To Disagree Without Being Disagreeable

This past weekend I toasted Ed Nell at his retirement function at the New School on being "a member of a vanishing species, a true gentleman and scholar of the Old School."  The death of Gary Becker reduces by one the size of that species, given that while Ed is now retired, he is still alive and hopefully still active, but Becker is no longer among us.

In making that toast I noted that many now view such a label as ironic or even silly, and few under the age of 40, or maybe even 50, would take it seriously.  "Scholar" is one thing and still generally OK when appropriate (clearly so in both of their cases), but "gentleman" is a much more questionable term, with many essentially instantly assuming that its use implies that the person in question is probably a classist or sexist or some other undesirable "ist," and I would probably agree that anybody running around claiming loudly to be one is likely to also be one of these not so admirable "ists."  But with the caveat of being "of the Old School," the label means that the person in question is not one of those "ists."  They respect others and are polite and friendly to others, even when they disagree with those others.  Indeed, the mark of this is being able to disagree without being disagreeable, something that applied to both of these Old School gentleman-scholars.

I did not know Gary Becker at all well.  However, I did have a number of professional interactions with him over the years.  Most of these involved in some way my editing journals that have behavioral economics as a main theme of what they publish.  Justin Wolfers has just posted a claim that Becker was really the first behavioral economist, even while admitting that he would not have liked to have labeled himself as such.  In general, Becker has been viewed by most behavioral economists as "The Enemy," probably the most important and influential scholar advocating a strongly rationalistic approach to economics, even as he took a broad view of what might enter into a person's preferences, which might include altruism and concern for others.  In any case, in all my personal and professional dealings with Gary Becker, he was always the utmost gentleman and scholar, able to disagree without being in the least disagreeable, a model gentleman-scholar.

Without doubt, however, Becker introduced into sociology, law, and several other disciplines an approach from economics that emphasized analysis based on a rational agent approach, and the influence of this will continue, and these models certainly serve as benchmarks, even when they are not fully correct.  He has certainly been the most important figure since WW II, indeed, possibly in the entire history of economics, to have spread this view, even as he took a moderate view of what it is that agents prefer or are seeking to maximize in a possible utility function.  I have also heard that he, along with several of his colleagues at Chicago, was unhappy and dismissive when word arrived that the founder of behavioral economics, the late Herbert Simon, had received a Nobel Prize in 1978, although I cannot verify that for certain.  Regarding these reported attitudes, I respectfully disagree.

This may not be very proper, but I am going to poke at his broader perspective, not on ideological grounds as many reading this might, but on substantive grounds, while keeping in mind that he always was indeed the perfect gentleman-scholar (of the Old School).  So, Tyler Cowen at Marginal Revolution linked to a 1962 paper by Becker in the JPE, "Irrational behavior and economic theory," which Tyler described as showing that the theorems of economic theory hold even in the face of irrational behavior.  I must report that this does not appear to be the case, and that this paper is much weaker in its arguments than I expected, although I suspect Becker would have provided more sophisticated arguments in more recent years.

So, the paper follows strongly on the "survivalist" arguments of Alchian and Friedman, that individuals and firms may not know what they are doing or be engaging in conscious optimization, but that those who survive the best will be those who are in fact coming closest to optimizing.  However, Becker pushes the argument further.  He identifies "rational behavior" as implying "downward-sloping demand curves," and while later in the paper he refers to this as a "tendency," early in the paper he simply asserts it to be true, no matter what.  He does not even recognize the theoretical possibility of Giffen goods, which have since been shown fairly strongly to exist in at least some cases (rice in China for one).  I suspect he was listening to George Stigler, who strongly asserted that there were no empirical Giffen goods, and in any case Becker does not offer any possible exceptions in particular, although he does use "tend" in places later in the paper.  In any case, Becker accepted that individuals might "behave irrationally," with impulse buying his main example, but then argued that they, and especially aggregated markets, and also firms, would nevertheless face downward-sloping demand curves due to budget constraints. Sorry, but no dice.

I note an even more striking possible exception to his claims about downward-sloping demand curves and indeed how these relate back to the fundamental argument about rationality.  I am thinking about speculative bubbles.  Now, at the time Becker wrote, he was almost certainly under the "survivalist" influence of Milton Friedman who had not too long previously dismissed the idea that speculative bubbles might lead to instability in foreign exchange markets on the grounds that speculators would lose money and be driven out of the market.  They would not survive because they would stupidly buy high and sell low.  However, we have since learned from DeLong et al that in fact "noise traders" can not only survive but can even be the best performers in a market.  Ooops.

And in fact we have seen lots of markets in the years since Becker's 1962 paper that certainly look like bubbles, with the dotcom stock market one example and housing markets in many nations others.  On the surface, these phenomena look like violations of "the law of demand."  Prices rise and people buy more of whatever it is, and vice versa, selling (or buying less) when price falls.  We see lots of this out there.  This is not all that uncommon.  So, the usual explanation in standard views is that the demand curve is shifting outwards, although still downward sloping.  It is shifting because ceteris is not paribus, and in particular, expectations are changing.  Now, there are some special cases where such shifting expectations might be rational, but the vast majority of evidence suggests that when we see this, we are not seeing rational behavior but what Minsky and Kindleberger would call a "mania."  These are cases where irrational behavior does not result in clearly downward-sloping demand curves.  I deeply respect Becker's scholarship and intellect, but on this one he was misguided.

BTW, I cannot resist closing on a personal note.  Many years ago, indeed, decades ago, I submitted a paper to a journal about speculative bubble dynamics.  The paper was rejected with a referee declaring that if bubbles existed that would mean that Giffen goods existed, and George Stigler had shown that they do not.  End of report and basis of paper rejection. And, indeed, as I suggested above, I think Stigler misled Becker on that matter back in those days as well, although Becker may well  have changed his mind on these matters in more recent years.

Let me close by noting one more positive aspect of Becker and his intellect. While I disagree (as noted above) with many things he argued, I also recognize that he was consistent in his views.  He was not a simple ideologue or party hack, and supported things that many who admire him did not but that were consistent with his broader philosophy.  He was indeed, whatever one thinks or thought of his arguments or positions, a genuine gentleman and scholar of the Old School.

Barkley Rosser

Becker and Marx

On Andrew Sullivan's Daily Dish, Justin Wolfers is quoted comparing Becker to Marx:

"no economist since Marx has had such a profound impact across the social sciences, transforming not just economics, but also sociology, political science, criminology, demography and legal scholarship"


I want to compare them in another respect. Both championed different forms of Rabid Economism. Marx's economism was holist, Becker's individualist, but both forms are equally reductionist and equally imbecilic. Marx's materialism reduces the cultural, the political, the ethical to super-structural epiphenomena: all were just distorted reflections of the underlying reality of class struggle. Becker thinks all human agency simply consists of maximizing utility. For neither thinker do human beings have the ability to think and act "for the sake of the world," as Hannah Arendt would say. For each, we are deluding ourselves if we think that acting can ever be a matter of trying to get things right - to do what is called for, to believe what is warranted - independent of what our interests dictate. For both, in other words, the concept of disinterested action - including the disinterested pursuit of truth - is a snare and a delusion. Finally, in this latter respect, both systems of thought are self-undermining: neither can make sense of itself as a disinterested attempt to understand the human condition.


(I owe my appreciation of this parallel to Deirdre McCloskey's hilarious characterization of Stigler as "the last vulgar Marxist.")

Monday, May 5, 2014

Funding for the Veterans Health Administration

Greg Mankiw tries to revive the case for a privatized VHA by citing an extremely unfortunate issue with respect to how the Phoenix VHA rationed services:
Maybe privatization would solve the problem. If veterans had vouchers that they could take to competing healthcare providers, they would likely not have had to wait as long.
Isn’t the real issue better discussed here?
The Department of Veterans Affairs spends more today in inflation adjusted dollars than it did after World War II and the Vietnam War, when millions of troops returned from the battlefield, according to federal budget figures … Two factors more than any others have driven health care costs higher at the Dayton VA Medical Center, officials said. Aging Vietnam veterans who have more health needs as they grow older, and the return home of thousands of veterans from the battlegrounds of Iraq and Afghanistan. "It's the number of veterans returning from the war, but it's also the conditions they are returning with," said Dr. William J. Germann, Dayton VA chief of primary care service and a retired Air Force brigadier general. "There are a number of veterans coming back dysfunctional and as a result may not be able to hold a job."
I have to wonder whether the economic advisors to the President who made that horrifically expensive decision to invade Iraq in 2003 advised him that William Kristol's forecast was way off base. As you may recall, Kristol said the cost of invading Iraq would be less than 0.2% of GDP. I also have to wonder why these economic advisors thought we could continue to lower tax rates when any sensible person would have known that the future costs of the Iraq War to the nation would be enormous. While it is true that VHA funding has risen sharply, it has not increased by a sufficient amount to keep pace with its growing demands. Mankiw's faith in marketplace solutions to this growing demand would put more of the budgetary costs on the backs of the returning soldiers.

Friday, May 2, 2014

The Unemployment Rate Fell for the Wrong Reason

BLS reported:
Total nonfarm payroll employment rose by 288,000, and the unemployment rate fell by 0.4 percentage point to 6.3 percent in April, the U.S. Bureau of Labor Statistics reported today. Employment gains were widespread, led by job growth in professional and business services, retail trade, food services and drinking places, and construction.
Not a bad increase in employment per the payroll survey, but when one looks at the household survey, reported employment fell. The employment-population ratio stayed at 58.9%. So why did the unemployment rate drop so much? The labor force participation rate fell from 63.2% to 62.8%, as noted later by the BLS:
The civilian labor force dropped by 806,000 in April, following an increase of 503,000 in March. The labor force participation rate fell by 0.4 percentage point to 62.8 percent in April. The participation rate has shown no clear trend in recent months and currently is the same as it was this past October. The employment-population ratio showed no change over the month (58.9 percent) and has changed little over the year.
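A quick back-of-the-envelope check of the arithmetic, using the rounded ratios from the release (so the implied figures come out a few tenths from the official 6.7 to 6.3 percent, but the story is the same):

```python
# Unemployment rate = 1 - (employment-population ratio) / (participation rate).
# With the employment-population ratio stuck at 58.9%, the drop in the
# unemployment rate has to come from the lower participation rate.
emp_pop = 0.589                      # employment-population ratio, unchanged
for month, lfpr in [("March", 0.632), ("April", 0.628)]:
    u = 1 - emp_pop / lfpr           # unemployed share of the labor force
    print(f"{month}: implied unemployment rate ~ {100 * u:.1f}%")
# March ~6.8%, April ~6.2%: the whole decline comes from people leaving the
# labor force, not from a larger share of the population being employed.
```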
We are far from full employment and we are not closing the gap quickly enough.

Paul Krugman Really Blows It

I still do not have a copy of Piketty's book, but watching the ongoing debates it has triggered has been quite fascinating, with the recent subtext of Paul Krugman (and a few others, notably Simon Wren-Lewis) arguing with various heterodox economists over it, particularly Jamie Galbraith and most recently Tom Palley, standing out quite noticeably.  I was frankly not all that impressed by Palley's latest bit on flim-flam, but in attempting to rebut it, Paul Krugman has just up and blown it in such a massive and embarrassing way I simply cannot resist commenting on it.

So, he comments on the "hangups of the heterodox," which I just tried to link to, but it did not work (sorry about that, but Mark Thoma will probably link to it).  Anyway, Krugman says a number of not unreasonable things, such as noting that when an economy is in a liquidity trap, flexible prices may be destabilizing.  But then at the end, when he attempts to deliver what he obviously considers to be the ultimate coup de grace, he not only blows it but falls flat on his face with a massive error.  I quote his final paragraph:

"And what's going on here, I think, is a fairly desperate attempt to claim that the Great Recession and its unfortunate aftermath somehow prove that Joan Robinson and Nicholas Kaldor were right in the Cambridge controversies of the 1960s.  It's a huge non sequitur, even if you think they were indeed (which you shouldn't.)  But that's what seems to be happening."

OK, I do not think that is what Galbraith or Palley are arguing, although Jamie in particular is making a case that Piketty wrongly ignores the capital theory debates, and Krugman has been whonking on about how the marginal productivity theory of income distribution is a "good first pass" on at least the factor income part of it.  But no, that is not where Krugman falls down.

It is in his parenthetical aside that "you shouldn't" think that Robinson and Kaldor (who was not one of the main participants in that debate from the Cambridge, UK side, and I say that as one who has recently been labeled a "Kaldorian") were right in the debate.  The problem is that Paul Samuelson agreed that Robinson and Piero Sraffa (and Garegnani and Pasinetti) were in fact right.  The possibility of reswitching does undermine profoundly the marginal productivity theory of factor income distribution, especially for capital.  He said so in his "Summing Up" paper after the symposium on reswitching in the QJE in 1966.  The final sentence of that paper, after going through the arguments in several papers, was "The foundations of economic theory are built on sand."

Krugman makes a lot of good points, but he really needs to get it together about what went down during the Cambridge capital theory controversies.  Robinson and Sraffa were right, and Paul Samuelson said so.  Period.

Barkley Rosser

Thursday, May 1, 2014

WaPo: From Luhansk To Lugansk And Back Again

I guess few here are interested in this, but I cannot resist noting the weird gyrations in the spelling of urban names in Ukraine going on at the Washington Post.  I note first that they consistently use the Russian-favored spelling of the national capital, Kiev (rather than Kyiv), while they consistently use the Ukrainian-favored spelling for the name of the second largest city in the nation (is it still a nation?), Kharkiv (rather than Kharkov).  But they are gyrating on this other city near the Russian border, east of Donetsk, with its major public buildings under the control of pro-Russian separatists, without any other cities in its oblast under such control.

So, prior to the recent uprisings, WaPo had been using the Ukrainian-preferred spelling: Luhansk.  Then fairly recently, as I noted in a previous post, they suddenly switched to the Russian-preferred spelling: Lugansk.  Yesterday in their main article on the region, they were openly schizophrenic, with a photograph labeled using the Russian-preferred spelling while the text had reverted to the Ukrainian-preferred spelling.  Today, there were several articles, and all have reverted to Luhansk, the Ukrainian-preferred spelling.  I do not know who is making these decisions or on what grounds, but I confess that I feel their pain.

I Finally Can’t Take Naomi Klein Any More

She seems like a nice person and she wants the best for everyone, but her writings have become so counterproductive, so utterly wrong and yet so influential within the activist community, that it’s time to stand up.  (Or in my case, sit down at the computer.)

I just read her latest screed in The Nation, “The Change Within: The Obstacles We Face Are Not Just External”.  For the record, not everything in it is nonsense, but the most important parts are.

I’ll start with the less significant gripe.  Klein writes that we are unprepared to deal with climate change because “Climate change is place-based, and we are everywhere at once.”  We move around too much.  We are too global.  We've got to stay in one place for long periods and get to know the local flora and fauna, ID that flower that’s blooming a week earlier this year.  The reason we don’t understand climate change is that we've become too cosmopolitan.

Well, count me as a rootless cosmopolitan.  I like where I’m living, but I like most of the places I used to live too.  I don’t have any particular attachment to local places, local people or local thinking.  Or to put it differently, I appreciate all the local places, people and cultures, and I don’t place my current abode above them.

And guess what?  While many of its effects are local, climate change is the mother of all global problems.  The carbon cycle is planetary, and its impacts are driven by processes that span the planet too.  Above all, the solutions have to be global in scope.  Living your own particular, pristine local lifestyle is not the answer, folks.  We need global collective action, and that’s going to come from people who adopt a global perspective and feel comradery with other people who speak different languages and live thousands of miles away.  Klein has it ever so wrong about localism.  Yes, love the local wonders wherever you may be, but try to summon within yourself an intellectual and emotional frame that’s broader than anything we've ever seen before.  That’s what we’ll need.

But the real whopper is what she leads with: “Climate change demands that we consume less....”  How can I express how angry I feel when I read this?  Yes, it is ignorant and appeals to the prejudices of her tribe (the nouveau righteous), but it is deeply, deeply hostile to human solidarity.  Oh, and it has as much political potential as a suicide cult.

Folks, we’re still not out of the global meltdown that hit us in 2008.  There is massive unemployment throughout the industrialized world, much of it unmeasured because workers have dropped out of the labor force.  Outside our charmed circle, such as it is, literally billions of people lack the basics for health, security and the pursuit of their dreams.  So it is true, a large portion of the world’s people are consuming less than they’d like, and here we have Klein cheering it on.

Let me get personal.  My college is going through another of its periodic spasms of budget cuts.  We've laid off faculty lines and are looking to slash expenses anywhere we can, despite the fact that we've been making these kinds of cuts for years and never restoring them.  I will grant that much of this can be laid to our own failings, but we are part of a larger story, the long-term defunding of public higher education in a country whose progress in that area has come to a standstill.  I can assure Klein that this will lead to less consumption: unemployed faculty will consume less, students priced out of higher education will consume less of this product, and people who supply goods and services to our institution will have to take a hit as well.  And you know what?  This will do nothing at all to stop the climate juggernaut.  (One of the positions we cut was for a faculty member whose specialty is “climate justice”.  What do you think of that?)

There are two massive holes in Klein’s argument that rival any open pit mine you might stumble upon.  First, what do you mean “we” when you say “we must consume less”?  Aha, you didn't mean everyone, just the ones who were overconsuming, right?  And who gets to decide who they are?  And in an economy in which my income is your spending (the fundamental macro identity, in case you were wondering) how are you going to cut the consumption of the “bad” people without starving the “good” ones?  It’s all simply bonkers.

And the other hole is that in an economy that operates on prices, as ours, for all its faults, clearly does, the economic quantity of consumption is not tethered to the physical quantity of resources people consume.  I know this first-hand: the faculty jobs we've cut would not have sped up the extraction of fossil fuels one iota—perhaps even the contrary if our climate specialist would have been ultra-persuasive.  Moreover, the students who acquire less education will not be saving the planet that way either.  (In case you were wondering: yes, education is part of GDP.)

Think about it: how can economic growth be “bad” and recessions, with all the cutbacks they entail, not be “good”?

And by the way, replacing a capital stock built up over decades in response to insanely low fossil fuel prices with one that runs sustainably is going to require a lot of economic activity—you know, GDP.

There was a companion article in the same issue by Chris Hayes that’s soooooo much better, and makes it crystal clear what an immense political task we have in front of us.  One place to start would be to stop doing dumb things, like telling the people whose support we’re trying to get that the solution is for them to have less.

Wednesday, April 30, 2014

A Supreme Cellphone Moment

The Supreme Court took up the question of whether police can search the cellphones of people they’ve arrested without first obtaining a warrant.  All the argument, according to the report in today’s New York Times, was about criminal cases, remote-controlled bombs or driving without a seatbelt.

Actually, there’s a far more important context, government suppression of peaceful demonstrations.  One of the standard tactics to emerge in recent years has been the mass arrest of demonstrators, scooped up by the hundreds, held in custody and then released a day or two later with charges dropped.  I’d like to see a successful legal challenge to this, but I’m not holding my breath.

If the court, in its wisdom, decides that routine searches of cellphones are permitted, it is only a matter of time before mass cellphone inspection becomes part of the routine.  Why settle for simply squashing a demonstration—why not find out the organizational structure, the informal networks, the pathways by which information and ideas are disseminated?  It’s all there in those little phones.

Monday, April 28, 2014

Minnesota Mafia Challenges Piketty

Let me begin by noting that I have as yet been unable to obtain a copy and so have not read Piketty's smash hit book yet.  However, I think that I know enough about what is in it to post on this particular matter.  Anyway, Tyler Cowen at Marginal Revolution has posted a challenge put to him by Tony Smith regarding Piketty's book that can be labeled as coming from the Minnesota Mafia.  The econ department at the University of Minnesota, along with the closely allied research department at the Minneapolis Fed, which have long had people going back and forth or simultaneously in both, has long been the real fountainhead of new classical real business cycle DSGE macro, even if some of those who initiated that movement there are either not there anymore (Prescott, and many former students), or no longer followers of it (Kocherlakota, now Minneapolis Fed President, who has turned away from it for its failure to predict the crash or say much useful about Fed policy), or both (Sargent).  Nevertheless, a major contingent remains in one or both of those places, including Chari, Kehoe, McGrattan, Rios-Rull, and several others, and their former students continue to identify strongly with the place while they are now all over the world.

The centerpiece of the post is a paper by Castaneda, Diaz-Gimenez, and Rios-Rull (http://www.econ.umn.edu/~vr0j/papers/maxrefin.pdf) entitled "Accounting for the U.S. Earnings and Wealth Inequality," published in the Journal of Political Economy in 2003.  Several other papers are linked, with the most impressive by Heathcote, Storesletten, and Violante, an overview paper of the broader approach from 2009.

These models are variations of DSGE models, except that they involve incomplete markets, with Heathcote et al. calling this approach the "standard incomplete markets" (SIM) approach.  The difference from usual DSGE models lies in which markets are incomplete: in this case, idiosyncratic risks go uninsured.  So, the missing markets are insurance markets, and these are what ultimately explain the development of inequality.  The other ingredient is heterogeneous agents, done in the Minnesota way as an interval on some variable, in this case discount factors.  The main paper also adds a social security mechanism.  With proper calibration, they claim to reproduce the pattern of income and wealth inequality in the US economy up to 2003.  Most controversially, they claim that introducing an estate tax will make little change in wealth distribution, only raising the wealth Gini from .79 to .80.  Cowen is impressed.

So, what is going on here?  Heathcote et al. lay out the various mechanisms and review the broader related literature.  Various models throw in family and human capital shocks, with the missing insurance markets including financial, public, and some others.  There is even a policy claim that being able to separate initial-condition effects from later shock effects suggests policies focused on early education to improve human capital or others focused on later "insurance."  Obviously this is very different from the reputed story that Piketty ultimately develops regarding dynastic families emerging in a "patrimonial capitalism" and his focus on the overall return to financial capital compared to the overall rate of growth.  The stories are extremely different.

In the end, what is really driving the Minnesota models is this distribution of different discount factors.  So, the bottom line is that those on top are patient and willing to abstain while those at the bottom are short-sighted and impatient, tsk tsk.  In this view, obviously the losers deserve what they (don't) get, while the virtuous rich deserve theirs.  This is the Protestant Ethic triumphant!

Barkley Rosser

The Illusion of Marginal Productivity

Page 341:
"As noted, the theory of marginal productivity and of the race between technology and education is not very convincing..."
Class dismissed.

Sunday, April 27, 2014

Conditional Minimization of Type I Error and the Emergence of Scientific Sects

I think I've figured out a crucial missing link in the account of science I've long supported, and I want to put it in words while it’s clear in my mind.  What follows will be a quick sketch without much detail or example.

The overarching framework is that the key to science as a progressive human activity is its privileging of the goal of minimizing Type I error (false positives).  Research protocols can largely be explained according to this principle, including elaborate validation of methods and apparatuses and rules for replication and statistical significance.  These protocols are often fudged in the grimy day-to-day reality of research, but the stature of a field as scientific depends on containing these breaches.  There are two practical consequences of the strong bias against Type I error: one is that understanding of the objects of research can be expected to improve over time, the other that an immense division of labor can be supported, since specialists in one subfield can rely on and build upon the validated findings of other subfields.

So far so good, but how do we explain the sectarian division of research communities, much less the periodic Kuhnian emergence and overturning of entire paradigms?  How can science be progressive in the sense of the previous paragraph and yet support mutually inconsistent research programs over long periods of time?

Here is my tweak.  Classical Type I error minimization is unconditional; it seeks to prevent false positives that might arise from any form of mismeasurement, misspecification, misinterpretation and so on.  All potential sources of error are taken into account, and the goal is to reduce as far as possible the likelihood that a false positive could result.  The problem is that this can be a herculean task.  There are a great many potential sources of error, and it typically isn't possible to address each one of them.  A fallback position is conditional minimization of false positives.  This describes a strategy in which a (hopefully sparse) set of assumptions is adopted, and then Type I error is minimized conditional on those assumptions.  A research “sect” is a community that shares a common set of assumptions as a basis for minimizing Type I error from the remaining sources.  This is, I think, what Kuhn meant by “normal” science.

And where do these assumptions come from?  That’s a huge topic, which historians and sociologists of science love to study.  Once they are adopted, a set of assumptions is maintained provided the conditional minimization of false positives it permits looks enough to practitioners like the unconditional kind.  If you do everything you’re supposed to, minimizing Type I error conditional on the assumptions of your community, and your results still exhibit numerous and costly false positives, these assumptions become vulnerable to challenge.  Here too, of course, actual scientific practice can be closer to or further from the ideal.  Some fields are aggressive at identifying anomalies; others train their adepts not to see them.

Seeing it this way allows me to acknowledge the good faith of practitioners whose assumptions differ from mine, provided they are honest about the conditionality of their work and willing to consider evidence that calls it into question.  They should expect this of me, too.  But skeptical historians of science tell us that self-awareness at this level is extremely rare, for personal and institutional reasons that are all too obvious.

Recognizing the necessity and ubiquity of conditional Type I error minimization makes me a bit more inclined to see economics as scientific.  I have complained in the past, for instance, about econometric work that doesn't so much test models as calibrate them.  Certain underpinnings of a model, like intertemporal utility maximization, are simply assumed, and then great amounts of statistical ingenuity go into devising and improving their empirical implementation.  I now see that this qualifies, at least formally, as conditional minimization of Type I error, and that, from within this research community, it sure looks like models are being progressively refined over time.  But I still think that economics rather stretches the boundaries of science in its willingness to cling to assumptions that, objectively considered, are extremely weak—inconsistent with the findings of other disciplines and at variance with observable fact.