Friday, November 14, 2014

Hatch’s Excuse to Repeal the Medical Device Excise Tax

I recently noted a report by Jane G. Gravelle and Sean Lowry of the Congressional Research Service:
A July 2014 report issued by the Treasury Inspector General for Tax Administration (TIGTA) found that the number of medical device excise tax filings and the amount of associated revenue reported are lower than estimated … The IRS estimated between 9,000 and 15,600 quarterly Form 720 tax returns with excise tax revenue of $1.2 billion for this same, two-quarter period. In other words, actual medical device tax collections were 76.1% of projected collections during this period.
I suggested that the shortfall could have been the result of transfer pricing abuse with respect to the constructive price – that is, the arm’s length price that the manufacturing division of the larger medical device companies would charge the wholesale distribution division. If a wholesale distributor would earn a gross margin between 25% and 35%, then this constructive price would be between 65% and 75% of the actual price, which means the effective tax rate is really between 1.5% and 1.7% of the actual price, not 2.3%. I then asked how some of the major medical device companies might get away with paying half this estimated amount.
Because the Big Four accounting firms are arguing for discounts that are twice my answer. How on earth do they justify this extreme result? It is called the Cost Plus Method with production costs being 25% of sales and a contract manufacturer return equal to 5% of sales. Of course, the $3.5 billion difference between the Big Four answer and the 35% discount rate under the Resale Price Method represents the value of the product intangibles of the medical device manufacturer. Under arm’s length pricing, no manufacturer would fail to include this amount in their price to a distributor. So how are the Big Four writing these reports with a straight face? The answer is simple – they are advocacy reports based on the assumption that the IRS is stupid.
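The gap between these two answers is straightforward arithmetic; a minimal sketch (the $100 sale price is a made-up round number, and the margins and rates are the ones quoted above):

```python
STATUTORY_RATE = 0.023  # medical device excise tax, applied to the constructive price

sale_price = 100.0  # hypothetical wholesale price charged to third parties

# Resale Price Method: the distributor keeps a 25%-35% gross margin,
# so the constructive (intercompany) price is 65%-75% of the sale price.
for margin in (0.25, 0.35):
    constructive = sale_price * (1 - margin)
    tax = STATUTORY_RATE * constructive
    print(f"RPM, {margin:.0%} margin: effective rate {tax / sale_price:.2%}")

# Cost Plus Method as described: production costs of 25% of sales plus a
# contract-manufacturer return of 5% of sales, so the constructive price
# is only 30% of the sale price.
constructive_cpm = sale_price * (0.25 + 0.05)
tax_cpm = STATUTORY_RATE * constructive_cpm
print(f"CPM: effective rate {tax_cpm / sale_price:.2%}")  # 0.69% of sales
```

The Cost Plus answer of roughly 0.69% of sales is indeed about half of the 1.5% lower bound under the Resale Price Method – which is exactly the shortfall the post is pointing at.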
It seems the senior Senator from Utah has a different take, which reminds me why I tend to call him Whorin Hatch. From a recent Bloomberg BNA story:
“They'll say they need the money for Obamacare, but in all honesty they're going to get less money than they ever thought because a lot of the companies that can't make a profit but have this tax on their sales, they're going out of business or going offshore,” Hatch said.
Rest assured – Covidien, J&J, and Medtronic are not going out of business, as they are incredibly profitable. And it does not matter that Covidien did a corporate inversion: the excise tax applies to any U.S. sales regardless of where the device is manufactured. But are we surprised that a Republican Senator turns a blind eye to tax evasion via manipulative transfer pricing and then uses the revenue shortfall as an excuse to gut the tax? While the rest of the world is concerned about Base Erosion and Profit Shifting, Republicans have gutted the IRS budget and advocate getting rid of the repatriation tax. We have the ability to enforce the tax laws on the rich and the powerful, provided we have the political will to do so. But with the Republicans in charge, this political will is nonexistent.

Thursday, November 13, 2014

Rules and Standards

Peter's discussion of policy rules reminds me of the tale of the highway design standard reputedly based on the height of a dead dog in the middle of the road. University of Toronto civil engineering professor Ezra Hauer tells this story:
Recall that one of the parameters in the design procedure [for highway crest curves] is the height of the obstacle to be seen by the driver in time. Originally (already in 1940) American engineering standards set the obstacle height at 4". Those who wrote this standard did not have any particular obstacle in mind (although, rumour has it that some refer to it as the ‘dead dog’ criterion). …the 4" was selected not because lower obstacles are not a threat to safety but because the selection of a higher obstacle would not save much in construction cost. Since, at that time, nobody knew how many crashes are due to obstacles on the road, what kinds of obstacles these are, and what fraction of crashes would not have occurred had the crest been flatter, the standards committee did what was sensible. They made a decision on the basis of what was known, namely the cost of construction. 
For two decades everybody was designing roads using exacting calculations to make 4"-high obstacles visible in time to stop. Then, around 1961, it became apparent that in the newer model cars the average driver's eye was much lower than a decade or two earlier. Thus, drivers of newer cars could not really see 4" objects at the prescribed stopping sight distance... The solution to the predicament was not difficult. Since the 4" obstacle neither was some real object nor has it been selected on the basis of some factual relationship to safety, the Committee on Planning and Design Policies had no compunction noting that "the loss in sight distance resulting from lower eye height could be offset . . . by assuming an object higher than four inches . . . ” Indeed in the 1965 AASHO Blue Book, 6" obstacles became the standard of design. 
At the time the standard for the design of crest curves came into being, little was known about safety. Today we know that only 0.07 percent of reported crashes involve objects less than 6" high. We also know that, till today, no link has been found between the risk of collisions with small fixed objects on crest curves and the available sight distance. On the contrary, “Crash rates on rural two-lane highways with limited stopping sight distance (at crest curves) are similar to the crash rates on all rural highways.” Thus, the assumption invoked at the dawn of highway design history which allowed the formulation of a design procedure based on the avoidance of dead dogs in the middle of the road seems to have little to do with real road safety. Still, till today, the same standard stands, the same exacting but illusory constructs are used in the design of crest curves. Only the size of the dog and of other parameters is changing.

Rules versus Discretion in Macropolicy

Economists really like policy rules.  Remember the monetarist bubble of the late 70s?  Just follow a rule for steady expansion of M2, and let the rest of the economy take care of itself.  That didn’t turn out very well, partly because no single measure of the money stock is tightly linked to the outcomes people care about, like inflation and output.  So then we had inflation targeting (and still do in the eurozone), but it turned out there wasn’t a fixed long run NAIRU (non-accelerating inflation rate of unemployment) like we were promised, and simple inflation targeting was leaving too many outcomes of concern unmanaged.  More recently we’ve had the Taylor Rule, targeting inflation and unemployment.  Like the other rules, it aspires to robotic implementation: just multiply the output and inflation gaps by their corresponding coefficients, and out spits the central bank’s policy interest rate.  Simple!
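The robotic implementation really is that simple; here is a minimal sketch using Taylor's original 1993 coefficients (0.5 on each gap, a 2% equilibrium real rate, a 2% inflation target – the gap values in the examples are made up for illustration):

```python
def taylor_rate(inflation, output_gap,
                r_star=0.02, pi_star=0.02, a_pi=0.5, a_y=0.5):
    """Policy interest rate prescribed by the Taylor Rule:

        i = r* + pi + a_pi * (pi - pi*) + a_y * (output gap)
    """
    return r_star + inflation + a_pi * (inflation - pi_star) + a_y * output_gap

# On target: 2% inflation, zero output gap -> a 4% nominal policy rate.
print(taylor_rate(0.02, 0.0))   # 0.04

# A slump: 1% inflation, output 3% below potential -> a much lower rate,
# and deep enough gaps push the prescription below zero, which is where
# the rule quietly stops being implementable.
print(taylor_rate(0.01, -0.03))
```

The mechanical character is the selling point and the weakness at once: everything the rule cannot see – asset prices, wage stagnation, labor force participation – simply does not enter the formula.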

Unfortunately, the evidence is piling up that this formula doesn’t do the job either.  Without endorsing their particular policy shifts, it’s clear that some central bankers, along with many economists, are now worried about the impacts of monetary policy on the potential for asset price bubbles.  Forestalling bubbles wasn’t a feature of policy prior to 2008, but now it has grabbed a lot of attention.  Meanwhile, it is clear that the Fed is unhappy with the headline unemployment rate as an indicator of labor market conditions, especially since stagnant wages seem to signal continuing slack.

The problems with policy rules are many.  The goals of macropolicy—the things we care about, like low unemployment, price stability, financial stability, satisfactory growth—don’t all move together, and their co-movements change over time.  The usefulness of the available indicators for progress on these goals fluctuates as well, since there is no single, perfect measurement of any of them, and the relationship between different indicators keeps changing.  What, for instance, is full employment, exactly?  Is it the same today as it was five years ago?  How should the target reflect  the changing composition of jobs, wage rates and labor force participation?  In addition, the effectiveness of policy instruments, like open market operations and fiscal balances, is contextual and uncertain; for a run of years slight alterations in central bank bond purchases can have powerful impacts on expectations and economic performance, and then you are in a zero lower bound world in which all sorts of exotic measures—the various types of quantitative easing—go mainstream.

So why are economists enamored of policy rules?  One reason is that they are seen as more efficient, since policy impact depends on how much sway it has over the public’s expectations of future variables, like inflation and aggregate demand.  Rules are favored, since they ostensibly remove uncertainty from policy stances.  You can trust the central bank or fiscal authorities to not reverse tomorrow what they’ve announced today because they’re just following rules that govern their actions, and the rules don’t change.  That’s a nice theory, but it doesn’t make the problem of policy credibility disappear; it just loads it all onto the commitment of policy-makers to follow whatever rule is currently in force.  But as we’ve seen, the rules have never performed in a predictable, satisfactory way, so simply pledging allegiance to the latest iteration doesn’t remove the rational doubt that today’s rule will be superseded by tomorrow’s, with corresponding skepticism toward policy consistency.

A darker motive is distrust of “politics”.  If we don’t constrain authorities to follow fixed rules, won’t they do stupid things under the influence of whatever special interests have captured them?  For some time the ruling fear was populism: if we don’t have ironclad policy rules, the temptation will always be for monetary and fiscal authorities to reach for short run, unsustainable increases in incomes.  The result will be periodic inflationary surges accompanied by boom-and-bust cycles whose harm vastly exceeds the momentary benefits of populist expansion.  (This was what Timothy Geithner no doubt had in mind when he told Christina Romer back in 2010 that fiscal stimulus is “sugar”.)  Ironically, however, the overriding political failure post-2008 has not been populism but its opposite, the craving for investor “confidence” by way of austerity.  Does this mean there is now a reason for the expansionistas to demand their own policy rule?

But rules don’t bypass politics; they are politics.  There are, after all, many candidate rules, and all of them have a tenuous relationship to how real economies function.  What’s the point of making rule selection rather than policy selection the object of political contest?

Make these arguments in the company of economists, and you are almost certain to hear: so, wise guy, what’s your alternative?  If there are no rules, doesn’t this mean policy-makers are free to follow whatever absurd theory supports their own prejudices?  This is a bit like the claim, variously attributed to Dostoevsky and Nietzsche, that if god is dead all is permitted.  But just as moral reasoning is possible without a sacred text, so is rational macropolicy without fixed rules.

It should be obvious, in fact.  Economies are extremely complex, evolving systems.  Interventions have uncertain consequences, and what holds at time A does not necessarily hold at time B.  Even observation is uncertain, and the relationship between what you think you see and what’s actually going on can change unexpectedly.  But this is also true for the natural world, and in the domain of restoration ecology and similar fields we have the paradigm of adaptive management.  This is an approach that takes complexity and evolution as starting points, emphasizing the role of learning and the need for flexible decision-making in response to ongoing feedback.  I would argue that, whether they know it or not, central bankers and other policy authorities are already operating in that mode.  The god of policy rules died some time ago, and they have no choice but to weigh data according to their current understanding and reconcile themselves to the error part of trial-and-error.  Bringing transparency and open debate to this process will make it better, and this means dropping the pretense, finally, that authorities can or should follow fixed rules.

Wednesday, November 12, 2014

Planning, the Theory of Growth and the Myth of Decoupling

Guardian: Rich countries subsidising oil, gas and coal companies by $88bn a year

Put in perspective, eighty-eight billion dollars probably isn't all that much compared to a Gross World Product roughly 1,000 times that amount. It's the principle of the thing. And as Kin Hubbard has taught us, "When a fellow says, 'It ain't the money but the principle of the thing,' it's the money."

It IS the money. Subsidizing expanded fossil fuel extraction is consistent with economic growth theory. The subsidies are rational, given the priorities of the rich countries' governments (say what you will about $20 bottles of Two Buck Chuck and the tribal moralism of preaching against big cars).

Faced with the dilemma of building socialism in a backward, peasant society, Soviet economists in the late 1920s hit on the idea of accelerating investment in the means of production, Marx's Department I. "But it would be absurd to say that economic growth is a new subject," Evsey Domar confessed long ago:
In economic literature, growth models, interpreted broadly, have appeared a number of times, at least as far back as Marx. Of the several schools of economics the Marxists have, I think, come closest to developing a substantial theory of economic growth, and they might have succeeded had they given less time and effort to defending their master's virtue. Some highly elaborate and interesting growth models did, however, appear in Soviet literature.
"These Soviet models are more fully developed than similar attempts made in the West..." Domar continued in a footnote, "See, for instance, G. A. Feldman [Fel'dman], "K Teorii Tempov Narodnogo Dokhoda," Planovoe Khoziaistvo, November, 1928, pp. 146-170, and December, 1928, pp. 151-178... They were evidently written in response to immediate practical problems of planning."
The present interest [May, 1952] in growth is not accidental; it comes on the one side from a belated awareness that in our economy full employment without growth is impossible and, on the other, from the present international conflict which makes growth a condition of survival.
... 
Our problem can now be formulated as follows: assuming that output and capacity are in balance at the outset, under what conditions will this balance be preserved over time, or in other words, at what rate should they grow to avoid both inflation and unemployment.
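Domar's answer to his own question can be stated in one line; a sketch in the notation of his 1946–47 papers (α the marginal propensity to save, σ the average productivity of investment – symbols not in the passage quoted above):

```latex
% Capacity grows with the level of investment:
\frac{dP}{dt} = \sigma I
% Demand grows with the change in investment, via the multiplier:
\frac{dY}{dt} = \frac{1}{\alpha}\,\frac{dI}{dt}
% Setting the two equal (no inflation, no idle capacity) gives the required rate:
\frac{1}{I}\,\frac{dI}{dt} = \alpha\sigma
```

Output and capacity stay in balance only if investment grows forever at the constant rate ασ – which is why the "condition" reads so much like a plan.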
Sounds like a plan! Now Sandwichman is not hostile to the idea of planning. After all, where would the Panama Canal be without a plan? Sandwichman just thinks that when you have a plan, you call it "The Plan," not "Growth," as if it were some spontaneous organic process that happens all by itself.

The problem with economics as a "positive science" is that it is no less prescriptive for all its disavowal. Economists lie. They wrap their ethical judgements in a patriotic flag of euphemism. Full employment is GOOD, right? Growth is GOOD, right? Therefore full employment is IMPOSSIBLE without growth! Growth is impossible without investment. Investment is impossible without profit. And profit is impossible without tax breaks to encourage fossil fuel discovery and development. What's normative about that? It is just the way it is.

In some Cloud Cuckoo Land it would be possible to decouple economic growth from greenhouse gas emissions. But in the planned economy we occupy, where full employment depends on growth, which depends on fossil fuel consumption, "decoupling" is a euphemism for "we have a secret plan to end the war."

A Double Irish Dutch Sandwich for EconoKash

Kash is back to blogging with a focus on transfer pricing:
Ireland has long been a favorite country for multinationals to set up shop in, thanks in part to its 12.5% corporate tax rate – one of the lowest in the world. A typical situation would be for a multinational based in the US or Asia to set up an Irish subsidiary as the principal entity from which to run its European business, thereby allowing it to legally record a significant portion of its European income in Ireland.
Actually, 12.5% is far from being one of the lowest rates in the world, but I've interrupted:
why does Germany treat Ireland so differently from Cyprus when it comes to providing financial assistance? One possible explanation is that the corporate tax rate in Cyprus, which had been set at 10%, was seen by Germany as being more egregious than Ireland’s rate.
But Cyprus may not be more egregious than Ireland as explained by Jesse Drucker:
Google cut its taxes by $3.1 billion in the last three years using a technique that moves most of its foreign profits through Ireland and the Netherlands to Bermuda. Google’s income shifting -- involving strategies known to lawyers as the “Double Irish” and the “Dutch Sandwich” -- helped reduce its overseas tax rate to 2.4 percent
The rest of this discussion is well worth the read. Look – international tax law can be challenging at times as effective rates can be much lower than statutory rates. A bigger challenge is why the national tax authorities allow transfer pricing mechanisms to shift so much income to places like Bermuda.

Tuesday, November 11, 2014

Two Types of Preferences and the Relevance of Cost-Benefit Analysis

Here is another in the string of posts inspired by my weekly class on cost-benefit analysis.  Last night’s topic was stated preference methods, like contingent valuation.  These are controversial because they are often used to put prices on things people don’t normally think of as having prices, like the “existence value” of whole species, pristine natural environments or the avoidance of risks to public health.

My view is that many of the confusions in economics can be traced to ambiguities in language.  We often use words to mean multiple things and then try to apply what works for one meaning to a different meaning, where it doesn’t.  Case in point: preferences.  I prefer A to B means I want state-of-the-world A to occur rather than state-of-the-world B, whether A and B are two pairs of shoes that could sit in my closet or two destinies for wild salmon along rivers that drain the Olympic Mountains in northwestern Washington State.  They are similar in the sense that both pertain to my wanting something, but they are also different.

I propose two kinds of preferences based on different motivations.  One I will call normative; this reflects my judgments regarding what I deem to be right or wrong.  The other is experiential, what I want based on how I would personally benefit from it.  Economists sometimes say that ethical judgments are essentially experiential, since you derive pleasure from seeing right triumph over wrong, but I disagree.  Experiential preferences cause you to want A over B because A makes you happier or gives you more “utility”.  Normative preferences give you happiness or utility if a choice process selects A, and you believe A is ethically preferred to B.  These are clearly not the same thing.  In the first case utility is a cause, in the second an effect.

An example of a fundamentally normative preference is the one exercised by a jury deliberating a civil or criminal dispute.  It would be absurd to have verdicts determined by jurors expressing a willingness to pay to convict or acquit, and then adding up the totals to see which is greater.  This is because juries are supposed to deliberate based on a conception of justice, not on what’s in it for them, personally.  An example of a fundamentally experiential preference is the question of whether to publicly subsidize a sports stadium in a city.  Taxpayers’ preferences will be based on the degree to which the team that plays in the stadium gives them some sort of personal excitement, satisfaction or pride.  This could well be captured by a technique that measures their willingness to pay for the stadium.

Of course, preferences that are primarily normative can have a secondary experiential component, and vice versa.  In the stadium example, for instance, one effect of a subsidy is to transfer public money to private investors in professional sports teams.  This has an ethical aspect, which may play a role in how preferences are established.  In fact, in a society with glaring shortfalls in public programs for health, education and other essential services, like Brazil, the ethical component may become primary, as we saw in the protests over the World Cup.  Where public funds are not so constrained and the gaps not so severe, the decision turns on what the local population expects to derive, personally, from better facilities for professional sports, and questions of ethics are secondary.

Most existence values for environmental goods, I would argue, are essentially normative preferences.  They are about what people believe to be right or meritorious, not what gives them personal satisfaction.  Willingness to pay in these circumstances makes about as much sense as a decision tool as it does in jury trials.  We might be misled by elements of experiential preference that enter the mix, but our well-being as members of a society that makes choices of this kind is an effect, not a cause of what we wish to see happen.

If this analysis is correct, CBA can help us put numbers on the experiential aspects of a policy choice, recognizing that some other process is needed to assess its normative merits.

Sunday, November 9, 2014

Plug and Play: The "New" Welfare Economics

Lionel Robbins (1929) "The economic effects of variations of hours of labour":
"The days are gone when it was necessary to combat the naïve assumption that the connection between hours and output is one of direct variation, that it is necessarily true that a lengthening of the working day increases output and a curtailment diminishes it."
Enrico Barone (1908) "The ministry of production in the collectivist state":
"It is convenient to suppose – it is a simple book-keeping artifice, so to speak – that each individual sells the services of all his capital and re-purchases afterwards the part he consumes directly. For example, A, for eight hours of work of a particular kind which he supplies, receives a certain remuneration at an hourly rate. It is a matter of indifference whether we enter A's receipts as the proceeds of eight hours' labour, or as the proceeds of twenty-four hours' labour less expenditure of sixteen hours consumed by leisure."
So much for combating naïve assumptions. Apparently all one had to do back in 1938 to avoid combat was "suppose" conveniently what in days gone by had been assumed naïvely, and that was enough to ground the "New" Welfare Economics in mathematical tractability. None of which would have worked if the connection between hours and output were not one of direct variation. A simple book-keeping artifice, indeed!

Did it matter whether or not the theoretical ball bearings upon which Budget Circular A-47 rolled were round? Of course not. No one ever read Barone. They just plugged his handy-dandy formula into theirs. But, hey, no interpersonal comparisons of utility were made. Sometimes you have to take a big leap of faith for Science.

Economics, History and Economic History, Misread

My curmudgeonly moment today is devoted to the latest issue of The Nation, which has published a review article by Timothy Shenk on several recently released books on the history of capitalism.  A standard complaint is that the author you’re criticizing has managed to make so many errors in so few lines, but Shenk’s review is bloated and circles endlessly around very little substance, so perhaps his ratio is more commonplace.  Still, the errors were annoying.

Unless you have too much time on your hands you won’t want to read the original, so here’s a short synopsis.  According to Shenk, economists and historians used to be cut from the same cloth, but they diverged in the twentieth century as economics became more technical and history more cultural.  Historians abandoned economics, and economists were interested only in issues related to national policies and economic growth.  Now a new cohort of “historians of capitalism” are boldly defining a sphere in which historians can explore the grand issues that economics has abandoned.  But the historians have identified capitalism with economic growth.  This has helped them make sense of institutions like slavery that were formerly seen as outside the capitalist penumbra, but it is problematic in other respects.  Economists and ecologists now agree that rapid growth is over, perhaps growth itself, so the new direction these historians are taking is of little value for the future.

I often resort to lists in these posts because I don’t have time to craft a proper essay that knits everything together.  It’s the same today.

1. Economic history as a subfield of economics has been ill-treated by the profession, but it has continued unabated.  What about Douglass North?  Cliometrics?  Business history?  The current fascination with the longue durée in economic life?  The history of finance?  Economic history is a massive enterprise and has asked every sort of question, large and small.

2. And historians never stopped debating the origins and meaning of capitalism.  There has been a vigorous literature on how to explain the divergence of Europe from the less dynamic trajectories of India and China in the early modern era and intense disputes over the evolution of living standards during the industrial revolution.  A lot of environmental history is also transparently a history of capitalism.  So also the history of science and technology.  So where does this idea that historians dropped the study of capitalism come from?

3. According to Shenk, the 1960s gave history its radicals committed to bottom-up narratives and economics its Friedmanites.  Actually, economics got its radicals too but had little institutional space for them.  And the market fundamentalists surged in the 1970s and ‘80s for largely unrelated reasons.

4. Normal long run per capita economic growth under capitalism is a modest 1-2%.  There are temporary exceptions in miracle economies and miracle decades, but the point Piketty and others are making is that mature capitalist economies should expect to see slow rates of growth in the future, as they had in the past.  Secular stagnation adds slower technological change and demographic transition to the mix.  The first is supply-side and the second results from demand since, as a population ages, its rate of investment falls.

5. Secular stagnation has nothing to do with the Malthusian fantasies of some parts of the environmental movement.  One could be true and the other false, or maybe they are both false.  Shenk’s reference to the end of “unlimited” economic growth gives away his confusion: economic growth is always limited by a wide range of factors including the cost of material inputs.  I’ve gone after the degrowth thing elsewhere and won’t take it up now, but I do want to register a complaint about the notion that the expectation (and fear) on the part of some economists that future economic growth will be sluggish has some connection to environmental beliefs that growth and ecological responsibility are incompatible.  They stem from completely different concerns, and they view growth in completely different ways.

It’s a sign of the times in the US that a house organ like The Nation has so few articles by economists and prints long (and I do mean long) pieces like this one about economics with no apparent fact-checking.

Incidentally, I’m interested in the books under review and would love to read something that discusses what they have to say.

"There is no such thing as a secondary benefit"

Arthur Maass, "Benefit-Cost Analysis: Its Relevance to Public Investment Decisions" (1966):
There is no such thing as a secondary benefit. A secondary benefit, as the phrase has been used in the benefit-cost literature, is in fact a benefit in support of an objective other than efficiency. The word benefit (and the word cost, too) has no meaning by itself, but only in association with an objective; there are efficiency benefits, income redistribution benefits, and others. Thus, if the objective function for a public program involves more than economic efficiency — and it will in most cases — there is no legitimate reason for holding that the efficiency benefits are primary and should be included in the benefit-cost analysis whereas benefits in support of other objectives are secondary and should be mentioned, if at all, in separate subsidiary paragraphs of the survey report.
… 
The executive agencies have painted themselves into the efficiency box. In 1950 the Subcommittee on Benefits and Costs of the Federal Inter-Agency River Basin Committee gave overwhelming emphasis to the efficiency ranking function in its now well-known “Green Book” report. In 1952 the Bureau of the Budget, in a Budget Circular that neither required nor invited formal review and approval by the Congress, nailed this emphasis into national policy, adopting it as the standard by which the Bureau would review agency projects to determine their standing in the President’s program. And soon thereafter agency planning manuals were revised, where necessary, to reflect this Budget Circular. In this way benefits to all became virtually restricted to benefits that increase national product. 
The federal bureaucrats, it should be noted, were not acting in a vacuum; they were reflecting the doctrines of the new welfare economics which has focused entirely on economic efficiency.

Saturday, November 8, 2014

Remedies Are Made of This... (cornmeal and potatoes edition)

"Mayor Wood of New York in 1857 suggested employing on public works everybody who would work, payment to be made one-quarter in cash and the balance in cornmeal and potatoes." -- Otto T. Mallery, "The Long Range Planning of Public Works," chapter XIV of Business Cycles and Unemployment, President's Conference on Unemployment, 1923.
Chapter XIX of John Maurice Clark's Studies in the Economics of Overhead Costs contains a section on "Remedies for the Business Cycle," in which Clark anticipated his later, much more extensive discussion in Planning for Public Works:
"For filling up the hollows [of the business cycle], the most positive and definite prescription is that government should plan an elastic schedule for public works of a postponable sort, and should save certain works to be prosecuted only in time of depression and unemployment, or prosecute the entire program more actively at such times."
Two years before Clark's book on overhead costs was published, President Warren G. Harding's Conference on Unemployment convened to consider how to relieve unemployment resulting from the 1921 depression. Commerce Secretary Herbert Hoover chaired the conference. Philadelphia playground pioneer Otto T. Mallery wrote the chapter on public works for the National Bureau of Economic Research's report to the conference.

After citing the opinion of the Minority Report of the 1909 Royal Commission on Poor Laws and Relief of Distress that "it is now administratively possible, if it is sincerely wished to do so, to remedy most of the evils of unemployment..." Mallery concluded his chapter with the observation that "flexible distribution of public works merits careful consideration as a factor in limiting the swing of the industrial pendulum and in lessening the shocks of unemployment." Thus was optimism kindled for combatting what John R. Commons reckoned to be "the greatest defect of our capitalistic system, its inability to furnish security of the job."

Ninety-some odd years later, how are those "remedies for the business cycle" working out? This is not to suggest that the various remedies proposed in 1923 by the President's Conference -- unemployment insurance, counter-cyclical spending on public works, improved economic statistics, responsive monetary policy -- were inappropriate or ill-conceived. The conference report may even be viewed as something of a blueprint for the New Deal.

As time went by, "various kinds of remedies" were replaced by aggregate demand management, which was superseded by a "real business cycle" focus on the supply side. Jean-Baptiste Say was rehabilitated. "If labour markets were allowed to function freely," the supply-side ideology claimed, "protracted unemployment would be cured automatically." In other words, the cure for unemployment is... unemployment.

Ninety-one years ago, Commons summed up the then prevailing interpretations of unemployment:
The older economists held that the elasticity of modern business was provided for in the rise and fall of prices through the law of supply and demand. But they assumed that everybody was employed all the time and that all commodities were on the markets and were being bought and sold all the time. If commodities in some directions were abundant then their prices would fall, which meant that the prices of other commodities would rise. Then the disparity would equalize itself by capital and labor shifting from the low-priced and over-supplied industries to the high-priced and undersupplied industries. The rise and fall of prices through oscillations of demand and supply made the system elastic and harmonious.
Seventy years ago Karl Marx came upon the scene with exactly the opposite interpretation. He rejected the law of demand and supply, with its oscillation of prices, and held that the elasticity of modern capitalism is found in the reserve army of the unemployed: Just as modern business must have a reserve fund in the banks and a reserve stock of goods on the shelves and in the warehouses, in order to provide for elasticity, so it must have a reserve army of that other commodity, labor, which it can draw upon in periods of prosperity and then throw upon its own resources in periods of adversity.
It was seventy years ago, also, that modern trade-unionism started in England and America. It started on the same hypothesis of unemployment, but it retained the economist's doctrine of demand and supply. There is not enough work to go around [!], because the wage fund is limited, and therefore the workman must string out his job; must go slow; must restrict output; must limit apprenticeship, must shorten the hours, in order to take up the slack of the unemployed.
This theory is not peculiar to labor unions. It is the common conviction of all wage-earners, burned into them by experience. Willing, ready and able to work, needing the work for themselves and families, there is no demand for their work. Trade unionists differ from unorganized labor in that they have power to put into effect what the others would do if they could. 
And who shall say that they are not right? Two years ago business men, newspapers, intellectuals, were calling upon the laborers to work harder; their efficiency had fallen off a third or a half; they were stringing out the jobs. Then suddenly several millions of them were laid off by the employers. They had produced too much. The employers now began to restrict output. Where labor restricted output in 1919 and 1920 in order to raise wages and prolong jobs, employers restrict output in 1921 in order to keep up prices and keep down wages.
The Marxian and trade-unionist critiques and prescriptions have been vanquished. Keynesian advocates of aggregate demand management are reduced to kibitzing from the sidelines. The "older economists" are back in the saddle. Everything old is new again. 

Or is it?
There's nothing you can do that can't be done
Nothing you can sing that can't be sung
Nothing you can say but you can learn how to play the game
It's easy
All you need is growth
All you need is growth
All you need is growth, growth
Growth is all you need
Is growth "all you need"? One hundred and five years ago, a Royal Commission minority surmised, "it is now administratively possible, if it is sincerely wished to do so, to remedy most of the evils of unemployment..."

If it is sincerely wished to do so.

Those who insist there are no "limits to growth" seem to forget that the evils of unemployment have not been remedied -- even though it was believed by some, over a century ago, that it was administratively possible to do so. If, in more than one hundred years, unemployment could neither be remedied administratively nor "decoupled" from economic growth, what foundation does one have for faith that economic growth can be "decoupled" from carbon dioxide emissions or other natural resources and ecological impacts?

Or was that transition too sudden? What I am saying -- and have been saying all along -- is that there are not one but two couplings implicated in the environment/economy nexus. To say that GDP growth can be decoupled from natural resource consumption is to speculate about only one of those couplings. We have no data from the future that can confirm or deny such speculation.

We do, however, have data on the persistence of business cycle fluctuations that result in unemployment. Remedies for climate change face precisely the same political and ideological barriers as do remedies for the business cycle. There is no reason on earth that one would be given a free pass while the other is held hostage to rapacity.

Medical Device Excise Tax and Transfer Pricing

The Republicans ran on tax cuts for the rich – well, the "job creators" – even if getting tax breaks will not necessarily lead to any new job creation. But it is time to reward the base:
First on the chopping block is likely the 2.3 percent tax on medical devices, such as hospital beds, MRIs, pacemakers or artificial joints. Soon-to-be Senate Majority Leader Mitch McConnell of Kentucky immediately targeted the tax in his victory speech Wednesday. The tax has been in place since 2013, and medical device lobbying groups have spent millions trying to gain support for repeal.
Jane G. Gravelle and Sean Lowry of the Congressional Research Service just provided an economic analysis of the Medical Device Excise Tax (MDET), which may provide some ammunition for the proponents of repeal. Their report turns out to be a fairly balanced presentation of the issues. Page 8 points out something that needed further development:
A July 2014 report issued by the Treasury Inspector General for Tax Administration (TIGTA) found that the number of medical device excise tax filings and the amount of associated revenue reported are lower than estimated … The IRS estimated between 9,000 and 15,600 quarterly Form 720 tax returns with excise tax revenue of $1.2 billion for this same, two-quarter period. In other words, actual medical device tax collections were 76.1% of projected collections during this period.
Paul Jenks echoes this shortfall:
Through the first half of 2013, Treasury auditors estimate that the tax levy should have collected $1.2 billion in excise taxes, but the IRS has received $913 million.
Was the problem non-compliance by firms who did not realize they were covered or was there something else going on? What is missing is any discussion of the transfer pricing aspects. Richard Ainsworth, Andrew Shact, and Gail Wasylyshyn (ASW) provide a nice discussion of the constructive price issue:
Related party pricing - Will the traditional constructive price rules apply in the medical devices area, most notably the 75% valuation safe harbor rule under Revenue Ruling 80-273? Given the related party transactions typical in the medical device market and the probability that transactions may be structured among related parties to reduce exposure to the MDET, this issue could impact the revenue raised under this tax … The proposed regulations indicate that there should be a “… basic sales price [that] assume[s] that the manufacturer sells the taxable article in an arm’s length transaction (that is, in a transaction between two unrelated parties) to a wholesale distributor that then sells the taxable article to a retailer that resells to consumers.” The basic sales price presumes a traditional manufacturer-wholesaler-retailer-consumer marketplace. In cases where the basic sales price is not available, a constructive price will be determined, as an exception … The medical device industry is exceedingly top-heavy. Although the Medical Device Manufacturers Association (MDMA) observes that 80% to 98% of the medical device manufacturers swept up in the MDET will be small businesses, 86% of the $20 billion the MDET is expected to generate over ten years will come from the ten largest firms. Thus, even though the MDET will be a considerable administrative burden for numerous companies, the government’s revenue will come primarily from the largest publicly traded multi-entity groups.
What does this all mean in terms of how much tax will be collected by the MDET? Let’s assume a medical device manufacturer with $10 billion in U.S. sales per year. I’m thinking of this firm or the medical device division of this firm. The MDET will not cost our firm $230 million at all. At most the tax bite would be only $172 million per year assuming these firms adopted the 75% safe harbor. But they are most likely paying less if not a lot less. I will argue shortly that the constructive price should more likely be only 65% of the end user sales price, which would mean our firm would pay $150 million per year in MDET. The reality, however, is that they are only paying $69 million per year through a simple form of transfer pricing manipulation. The Big Four issued commentary on this issue similar to the following:
In the absence of further IRS guidance, taxpayers subject to the MDET who are concerned that the safe harbors set too high a constructive sales price or who have transaction patterns falling outside the safe harbors may decide to take approaches similar to those used elsewhere for U.S. federal income tax purposes to establish arm’s length prices. The taxpayer should first determine if it has any sales of a particular medical device at arm’s length at the same point in the distribution chain that can be used as a proxy to determine the constructive sale price for transactions with related parties involving such device. If not, taxpayers may be able to use certain transactional transfer pricing methods to support a more realistic constructive sale price for MDET purposes. The IRS has extensive guidance under the transfer pricing rules of section 482 on the “arm’s length principle” and permissible methods for establishing appropriate transfer pricing for U.S. federal income tax purposes. Very generally, the arm’s length principle of section 482 (including its various acceptable transfer pricing methodologies) is intended to produce a price that is consistent with the results of a transaction “if uncontrolled taxpayers had engaged in the same transaction under the same circumstances (arm’s length result).” Although the Treasury Department and the IRS made a curious comment in the preamble to the final MDET regulations suggesting that transfer pricing methods for section 482 purposes are not appropriate for MDET purposes, taxpayers may be skeptical about the statement’s significance given that constructive sales price rules and the transfer pricing rules generally have the same objective. 
Similar to the arm’s length principle articulated in section 482, the constructive sales price for purposes of a manufacturer’s excise tax (including the MDET) is “the price for which such articles are sold, in the ordinary course of trade, by manufacturers or producers thereof, as determined by the Secretary.”
That comment that section 482 principles are not appropriate for MDET purposes is indeed curious. I would argue, based on an appropriate application of the Resale Price Method, that one could readily support a discount between 30% and 35%. The 30% argument comes from the observed gross margin of companies such as this one or this one. ASW also notes that if there is a service division as well as a sales division for a medical device manufacturer, their value-added should be excluded from the constructive price. Let’s assume these divisions’ combined operating expenses are 30% of sales and an appropriate operating margin is 5%. Then you get my 35% discount. So how would the medical device manufacturers pay only $69 million and not my $150 million? Because the Big Four accounting firms are arguing for discounts that are twice my answer. How on earth do they justify this extreme result? It is called the Cost Plus Method with production costs being 25% of sales and a contract manufacturer return equal to 5% of sales. Of course, the $3.5 billion difference between the Big Four answer and the 35% discount rate under the Resale Price Method represents the value of the product intangibles of the medical device manufacturer. Under arm’s length pricing, no manufacturer would fail to include this amount in their price to a distributor. So how are the Big Four writing these reports with a straight face? The answer is simple – they are advocacy reports based on the assumption that the IRS is stupid.
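To spell out the arithmetic running through this post, here is a minimal sketch in Python. The $10 billion sales figure and the discount scenarios are the assumptions from the text, not data from any actual company:

```python
SALES = 10e9        # assumed annual U.S. end-user sales for our hypothetical firm
MDET_RATE = 0.023   # the 2.3% medical device excise tax rate

def mdet_liability(discount):
    """Tax owed when the constructive price is (1 - discount) times the end-user price."""
    return SALES * (1 - discount) * MDET_RATE

print(mdet_liability(0.00))  # tax on the full end-user price: about $230 million
print(mdet_liability(0.25))  # 75% safe harbor: about $172.5 million
print(mdet_liability(0.35))  # 65% constructive price: about $150 million
print(mdet_liability(0.70))  # the Big Four Cost Plus answer: about $69 million

# The $3.5 billion intangibles gap is the difference between the two
# constructive prices: 65% of sales under the Resale Price Method versus
# 25% production costs plus a 5% contract manufacturer return (30% total)
# under the Cost Plus Method.
rpm_price = (1 - 0.35) * SALES
cost_plus_price = (0.25 + 0.05) * SALES
print(rpm_price - cost_plus_price)  # about $3.5 billion
```

The point of the sketch is that the choice of transfer pricing method alone moves the annual tax bill for this one hypothetical firm by more than $80 million.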

You are not authenticated to view the full text of this chapter or article.

Does not being authenticated mean I am a fake?

Friday, November 7, 2014

Neo-Fisherian Nonsense

Nick Rowe is drained by some weird new idea:
Figuring out the intuition behind John Cochrane's paper, to see what was really going on in his model, really drained me. Do I really have to wade through that Stephanie Schmidt-Grohe and Martin Uribe paper too, and reverse-engineer their result as well? I'm too old for this. Don't any of you young whippersnappers have an economic intuition? Do you all get snowed by every fancy-mathy paper that comes along?
If I’m wrong about this – let Stephen Williamson correct me – but I thought he started down the neo-Fisherian train of thought because he realized that his (and others’) prediction that Bernanke’s Quantitative Easing would lead to hyperinflation was not panning out. So when one’s model fails, turn to another model. And if one is somehow averse to admitting Keynes got something right – come up with something that makes no sense. John Cochrane decides to weigh in with a post I could spend all day making fun of:
At left is what we might call the pure neo-Fisherian view. Raise interest rates, and inflation will come. I guess there is a super-pure view which would say that expected inflation rises right away. But that's not necessary.
I’m sort of glad he laid this out and it makes my first take easier. This whole neo-Fisherian nonsense strikes me as turning Dornbusch overshooting on its head. Dornbusch would argue that an expansionary monetary policy pushed real interest rates down in the short-run with the excess aggregate demand slowly driving up inflation. The neo-Fisherites would have it raise real interest rates in the short-run. So how is this supposed to work? We get a reduction in aggregate demand which becomes inflationary? Nick isn’t the only one getting too old for this. To be fair to Cochrane, his later ramblings did include this:
In the standard view, a central bank would soon see inflation spiraling down, would quickly lower interest rates to push it back up again. Upside down, this might be a stylized view of the 1970s and 1980s.
My second – and more limited task – is to wonder whether Cochrane ever read Friedman and Schwartz:
This dog that did not bark has demolished a lot of macroeconomic beliefs: MV = PY. Sorry, we loved you. But when reserves go from $50 billion to $3 trillion and nothing at all happens to inflation -- or at most we're arguing about percentage points -- it has to go out the window.
Where to begin with this? Increases in the monetary base are not the same thing as increases in the money supply – whether measured in terms of M1 or M2. Friedman and Schwartz noted that the Federal Reserve did increase the monetary base during the Great Depression but the money supply fell. Here the explosion in the monetary base during Quantitative Easing was met with a less than proportional rise in the money supply. Taking the 7-year period from mid-2007 to mid-2014, nominal M2 rose by only 56.3%. OK – over the same period nominal GDP rose by a mere 20.36% with velocity falling from 1.99 to 1.54. But guess what – velocity fell during the Great Depression as well. Let’s look at a few charts of this measure of velocity starting with the one provided by FRED. Does Cochrane see a stable velocity before the Great Recession? I don’t. John Mauldin apparently loves this Quantity Equation but notes:
let's look at the velocity of money for the last 108 years. Notice that the velocity of money fell during the Great Depression.
Velocity was 1.95 in 1918 but only 1.17 by 1932. Velocity’s variability over the Great Depression period was staggering. But once one realizes that velocity does not capture any agent’s behavior but is really nothing more than a silly definition, then why would any economist love it? Another observer of velocity during both the Great Recession and the Great Depression tried the following experiment:
Let's go back even further in time and look at another velocity of money using GDP and the annual St. Louis Adjusted Monetary Base ... instead of M2 …Notice that the drop in the velocity of money after the Great Recession is unprecedented....except, if you go all the way back to the Great Depression.
Call me an old-fashioned Keynesian if you will but I was never impressed with the Quantity Equation. And this neo-Fisherian nonsense strikes me as a waste of time.
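For what it is worth, the velocity figures quoted above are internally consistent. By the quantity equation MV = PY, velocity is just nominal GDP divided by the money stock, so the cited growth rates imply the cited fall in velocity. A quick check (the 1.99 starting value and the growth rates are the figures quoted above):

```python
# V = PY / M, so velocity grows with nominal GDP and shrinks with M2.
v_2007 = 1.99        # M2 velocity in mid-2007
gdp_growth = 0.2036  # nominal GDP growth, mid-2007 to mid-2014
m2_growth = 0.563    # nominal M2 growth over the same period

v_2014 = v_2007 * (1 + gdp_growth) / (1 + m2_growth)
print(round(v_2014, 2))  # -> 1.53, consistent with the 1.54 cited above
```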

Thursday, November 6, 2014

Microeconomics, From Physical to Metaphysical

Here is another in my continuing series of snippets from my class in cost-benefit analysis.

The prices and quantities actually traded are visible or nearly so.

The price elasticity of demand at the current market price is one degree removed from immediate visibility; it requires only a pair of price/quantity points that occur in what you’re willing to accept as the same market at the (nearly) same time.
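For instance, with two observed price/quantity points one can compute a midpoint (arc) elasticity. A sketch with made-up numbers:

```python
def arc_elasticity(p1, q1, p2, q2):
    """Midpoint formula: percentage change in quantity over percentage
    change in price, each measured against the average of the two points."""
    pct_dq = (q2 - q1) / ((q1 + q2) / 2)
    pct_dp = (p2 - p1) / ((p1 + p2) / 2)
    return pct_dq / pct_dp

# Hypothetical observations: price rises from $4 to $6 and the quantity
# sold falls from 110 to 90 units.
print(arc_elasticity(4, 110, 6, 90))  # -> -0.5
```

Note that the computation assumes the two points sit on the same stable demand curve, which is exactly the kind of invisible assumption the next paragraph addresses.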

The demand curve is not visible, despite how often you see these curves in textbooks.  It can be estimated from visible market data with some additional assumptions about functional form, or from surveys conditional on the extent to which they are thought to reflect real potential market behavior.  It can be validated ex post for prices actually offered in markets.

Utility is not visible, nor can imputations of it be validated in any conceivable way.  It can be contradicted (and is) by measured subjective well-being, but this just means that MSWB is not the same as utility.  Utility has the same scientific stature as the soul.

The Great Electoral Wipeout of 2014

Like everyone else I know, I've been trying to interpret the mass annihilation of Democrats in the elections that just took place.  I have no particular expertise to apply, and in any case the data we need to test our hypotheses aren't available yet.  Right now, it’s all speculation.

First, let’s agree on the facts.  (1) At the national level the Democrats got creamed.  Their losses in the senate, when this is all over, will prove to be even greater than their most pessimistic pollsters imagined.  Even their victories, like Warner’s squeaker in Virginia (if it holds up), were signs of collapse.  And they got further clobbered in the House.  (2) Governorships and state legislatures went decisively Republican, with few exceptions.  This was a red tide all the way down.  (3) Turnout was low, but even where it wasn't (Colorado), it didn't make a significant difference.

Now for some hypotheses.

1. This was the biggest-spending nonpresidential election the country has ever seen, and a large proportion of the total was anonymous and unaccountable to anyone except political strategists.  There was wall-to-wall TV advertising in October.  My browser was popping with political ads for a state senate race that wasn't even in my district.  (Hey, don’t these guys know about zip codes?)  The bulk of the money was Republican, but of course we don’t know how effective all this spending actually was.  While I’m sympathetic to this howl of outrage by Jeffrey Sachs, I think it’s premature to conclude that money was a big factor.  Look for a wave of research (and “research”) conducted by and for political operatives trying to convince donors to pump in even bigger bucks the next time around.

2. It was also an election of fear.  Polls have shown an obsession with ISIS and Ebola that can only be described as paranoia.  I’m not saying that there aren't nasty paramilitary groups around the world or scary diseases, just that the current moment isn't actually scarier overall than others we've lived through, but a substantial portion of the public is convinced that we are staring at the face of  Armageddon.  This was perhaps the main theme of late-campaign advertising and messaging, no doubt driven by the widespread belief that fear activates mental processes favorable to conservatism.  Whether that was a meaningful factor in the rout remains to be determined, however.

3. It was a scream of anger directed at Barack Obama, personally.  The intensity of this hatred is in about the same range as we saw with Bush Junior during the late stages of his presidency, but the causes are different.  With Bush it was above all the catastrophe of the Iraq invasion and the nonchalant dishonesty with which it was peddled, as well as the perception that his ignorance and lack of interest in the hard work of governance was revealed in the botched response to Hurricane Katrina.  To put it bluntly, he came across as an overly entitled frat boy, an easy target as it turned out.  With Obama it’s a little more complicated.  To begin, we can’t overlook the fact that hatred of Obama is largely a white phenomenon.  White people hating a black guy has to have a racial element.  Here’s a very speculative reading: Obama is a professional talker.  He has given us years of smooth talk about making government work for us, supporting the middle class, and managing international conflicts prudently and professionally.  But the reality has been a steadily declining median income, a lack of visible success in government programs, and continuing global chaos.  In the case of ACA/Obamacare, first there was the disaster of the website meltdown, which John Judis, based on polling data, describes as Obama’s Katrina, and then, due to the complexity of the program, the delay (at best) in the impact on health care utilization.  (How many voters have benefited yet from ACA in the form of actually getting health care they needed, reducing their medical spending, and not being enslaved to health insurance benefits when deciding what job to take or stay in?  This might kick in over time, and perhaps ACA will be given some credit for the decline in health cost inflation, but we’re not there yet.)  What I’m getting to is this: there’s a big gap between Obama’s rhetoric and the results on the ground, and this plays into a racial stereotype, the jiving black guy.  
The performance of Obama explains a general disillusionment with what his wing of the Democratic party has come to represent, but the racial element explains the hatred.  It’s interesting that Obama has gotten trapped in this bind; clearly much of his original appeal was based on his being able to convince voters that even though he was black he wasn't angry or militant or anything like that.  And that’s still the case.  But he waltzed into a different, but just as toxic, racial stereotype, and since it’s based on not believing anything the man says any more, there’s nothing he can say to defuse it.  Worse, it tarred the entire party in the eyes of many white voters, since they suspected all along that the Democrats had become the party of those people, and now they knew it for sure.  In this context, it’s interesting that a fourth of Republican voters voiced displeasure with their own party in the exit polls, but they hated Obama and the Democrats more.

4. There’s no sign that the electorate shifted to the right on substantive political issues.  In fact, the evidence from referenda around the country, on marijuana, guns, abortion, and the minimum wage, is that, if anything, the swing is moderately to the left.  This election played out on ideological and cultural stereotypes.

5. Blaming the election results on low turnout is a distraction: turnout is, as we like to say, endogenous.  The voters Democrats depend on, younger, lower-income, nonwhite, weren't motivated, as they often aren't.  I think it’s presumptuous to claim that their lack of interest is a technical problem to be solved by better outreach and mobilization.  No doubt more accurate targeting and so on can play a role, but surely the biggest element is that, unlike the Republicans, the Democrats don’t stand for anything their base is likely to get motivated about.  Worse, in power Democrats do stand for principles (privatizing education and more liberal and lucrative finance, to mention just two) that are anathema to large parts of their base.  It is not an exaggeration to say, as Arun Gupta does, that “it’s time to rethink this notion that Democrats lack principles. They have a clear agenda and are actually more ideological than Republicans. Democrats like Obama are willing to lose power to carry out the neoliberal agenda.”

To repeat, all of this is simply speculation.  These are hypotheses that can and hopefully will be put to empirical tests.  I especially hope we will have some experimental evidence on the racial dimension of Obama-loathing, since we’d be a lot better off as a society if we could talk about this stuff openly and honestly.  Oh, and we need a programmatic political movement with a post-neoliberal vision and agenda.