I think I've figured out a crucial missing link in the account of science I've long supported, and I want to put it in words while it’s clear in my mind. What follows will be a quick sketch without much detail or example.
The overarching framework is that the key to science as a progressive human activity is its privileging of the goal of minimizing Type I error (false positives). Research protocols can largely be explained according to this principle, including elaborate validation of methods and apparatuses and rules for replication and statistical significance. These protocols are often fudged in the grimy day-to-day reality of research, but the stature of a field as scientific depends on containing these breaches. There are two practical consequences of the strong bias against Type I error: one is that understanding of the objects of research can be expected to improve over time; the other is that an immense division of labor can be supported, since specialists in one subfield can rely on and build upon the validated findings of other subfields.
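A back-of-the-envelope gloss (my own stylized formalization, assuming independent tests at a common significance level $\alpha$) shows how much work these protocols can do: a single significance test caps the false-positive probability at $\alpha$, and demanding $k$ independent replications compounds the cap:

$$P(\text{reject } H_0 \mid H_0 \text{ true}) \le \alpha, \qquad P(\text{all } k \text{ replications reject} \mid H_0 \text{ true}) \le \alpha^k.$$

At $\alpha = 0.05$, two independent successful replications already push the bound down to $0.05^2 = 0.0025$.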
So far so good, but how do we explain the sectarian division of research communities, much less the periodic Kuhnian emergence and overturning of entire paradigms? How can science be progressive in the sense of the previous paragraph and yet support mutually inconsistent research programs over long periods of time?
Here is my tweak. Classical Type I error minimization is unconditional; it seeks to prevent false positives that might arise from any form of mismeasurement, misspecification, misinterpretation and so on. All potential sources of error are taken into account, and the goal is to reduce as far as possible the likelihood that a false positive could result. The problem is that this can be a herculean task. There are a great many potential sources of error, and it typically isn't possible to address each one of them. A fallback position is conditional minimization of false positives. This describes a strategy in which a (hopefully sparse) set of assumptions is adopted, and then Type I error is minimized conditional on those assumptions. A research “sect” is a community that shares a common set of assumptions as a basis for minimizing Type I error from the remaining sources. This is, I think, what Kuhn meant by “normal” science.
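A toy simulation makes the distinction concrete (a minimal sketch of my own, not drawn from any actual study; the choice of test and distributions is purely illustrative). A one-sample t-test controls Type I error at its nominal level conditional on roughly normal data; when that assumption fails, the nominal guarantee need not hold:

```python
# Toy illustration: Type I error control is conditional on assumptions.
# A one-sample t-test holds its nominal 5% false-positive rate when its
# normality assumption roughly holds, but the guarantee can slip when
# that assumption fails. All numbers here are illustrative.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
n, trials, alpha = 20, 10_000, 0.05

def false_positive_rate(draw_sample):
    """Share of trials that reject a true H0 (mean = 0) at level alpha."""
    rejections = 0
    for _ in range(trials):
        statistic, pvalue = ttest_1samp(draw_sample(), popmean=0.0)
        if pvalue < alpha:
            rejections += 1
    return rejections / trials

# Assumption holds: normal data, true mean exactly 0.
normal = lambda: rng.normal(0.0, 1.0, size=n)
# Assumption fails: strongly skewed data, still with true mean exactly 0
# (the lognormal(0, 1) distribution has mean exp(0.5)).
skewed = lambda: rng.lognormal(0.0, 1.0, size=n) - np.exp(0.5)

print("FPR with assumption true: ", false_positive_rate(normal))
print("FPR with assumption false:", false_positive_rate(skewed))
# The first rate sits near the nominal 0.05; the second typically drifts
# away from it, even though H0 is true in both cases.
```

The researcher running the second kind of test has done nothing procedurally wrong; the Type I error guarantee was simply conditional on an assumption the world declined to satisfy.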
And where do these assumptions come from? That's a huge topic, which historians and sociologists of science love to study. Once adopted, a set of assumptions is maintained provided the conditional minimization of false positives it permits looks enough to practitioners like the unconditional kind. If you do everything you're supposed to, minimizing Type I error conditional on the assumptions of your community, and your results still exhibit numerous and costly false positives, those assumptions become vulnerable to challenge. Here too, of course, actual scientific practice can be closer to or further from the ideal. Some fields are aggressive at identifying anomalies; others train their adepts not to see them.
Seeing it this way allows me to acknowledge the good faith of practitioners whose assumptions differ from mine, provided they are honest about the conditionality of their work and willing to consider evidence that calls it into question. They should expect this of me, too. But skeptical historians of science tell us that self-awareness at this level is extremely rare, for personal and institutional reasons that are all too obvious.
Recognizing the necessity and ubiquity of conditional Type I error minimization makes me a bit more inclined to see economics as scientific. I have complained in the past, for instance, about econometric work that doesn't so much test models as calibrate them. Certain underpinnings of a model, like intertemporal utility maximization, are simply assumed, and then great amounts of statistical ingenuity go into devising and improving their empirical implementation. I now see that this qualifies, at least formally, as conditional minimization of Type I error, and that, from within this research community, it sure looks like models are being progressively refined over time. But I still think that economics rather stretches the boundaries of science in its willingness to cling to assumptions that, objectively considered, are extremely weak—inconsistent with the findings of other disciplines and at variance with observable fact.
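To make the calibration complaint concrete, here is a deliberately cartoonish sketch (the model, the numbers, and the "observed" moment are all invented for illustration, not taken from any actual study): the structure is assumed outright, and the data serve only to pin down a free parameter, so the exercise can never reject the structure itself.

```python
# Cartoon of calibration: the model's structure is assumed, and data are
# used only to choose the parameter that best matches an observed moment.
# Nothing here tests whether the assumed structure is right.

# Assumed structure (taken on faith): steady-state capital-output ratio
# in a toy growth model, K/Y = s / (g + delta), with savings rate s and
# growth rate g treated as known.
s, g = 0.25, 0.02            # made-up "known" quantities
observed_KY = 3.0            # made-up observed capital-output ratio

# "Calibrate" the depreciation rate delta so the model reproduces the data.
delta = s / observed_KY - g
print(f"calibrated delta = {delta:.4f}")

# The fitted model now matches the target moment by construction:
print(f"model K/Y = {s / (g + delta):.2f} vs observed {observed_KY:.2f}")
# A genuine test would confront the structure with data it was NOT fitted
# to; calibration alone can only ever "succeed".
```

Within the community, refining such a fit looks like progress, and conditionally it is; the assumed structure itself never faces a jury.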
3 comments:
There is a simpler objection to the notion that a bias against false positives underpins scientific progress: it is easy to "fake" a hypothesis test by reverse engineering, and it is even easier to delude oneself that one hasn't faked it.
Statistical testing is no defense against bullshit. Lies, damned lies, and statistical significance.
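For instance (a quick sketch with invented data): screen enough noise variables against a noise outcome and report only the best p-value, and "significance" appears on demand.

```python
# "Reverse engineering" a significant result: correlate many noise
# variables with a noise outcome, then report only the best p-value.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n, candidates = 100, 20

y = rng.normal(size=n)  # outcome: pure noise
p_values = []
for _ in range(candidates):
    r, p = pearsonr(rng.normal(size=n), y)  # predictor: more pure noise
    p_values.append(p)

print(f"best p-value out of {candidates}: {min(p_values):.4f}")
# With 20 independent tries at the 5% level, the chance of at least one
# "significant" hit is about 1 - 0.95**20, roughly 0.64: a publishable
# result most of the time, conjured from nothing.
```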
The problem is not in considering Economics a 'Science', at least not as you define that endeavor. The problem is that Economics adopted a definition of Science current in the late 19th century, in turn derived from Classical Mechanics: one that held that the fundamental issues had been solved in Physics, and that the answer was just to extend the methods of the hardest of 'hard sciences' to the other disciplines and then start adding decimal points to the calculated results. That is, the idea was that Chemistry could be reduced to Physics, Bio-Chemistry to Chemistry, Biology to Bio-Chemistry, and so on until it included the 'Social Sciences', including Psychology and Economics.
Unfortunately for Economics and its practitioners, Physics itself abandoned this mechanistic and positivistic model under the hammer blows delivered by Einstein, Planck, Heisenberg et al., and went over to a more contingent, best-case model that rejected Positivism in favor of Explanatory Power.
But it seems to me that Economists never caught on to this profound shift in the understanding of the very nature of science and clung to their 1880s version, as if everything could ultimately be solved simply by elaborating their versions of F = ma with enough mathematical and econometric precision.
20th-century Physicists and Mechanical Engineers are fully aware they are working from 'good enough' models. You don't have to take into account every (or really any) quantum variation or time dilation effect to land a spacecraft on Mars at a time and place of your choosing. A few of the larger ones, maybe, but mostly they just won't apply at the scale they are operating at.
Economics could operate more explicitly like this. There is no need to calculate the world economy by solving equations that start with the six or seven billion variables that are the world's peoples. But equally, you can't just assume that a handful of accounting equations are the equivalent of natural laws and, like Euclid or Newton, just work from there. In Popperian terms, you cannot just work Word to World; at some point you need to go back and test Word AGAINST World and see whether your current model really is 'Good Enough'.
Empiricism isn't everything, but you simply can't exempt a discipline from testing and call it 'Scientific'. No matter how neatly your four variables interact on a whiteboard.
Interesting post. There is something a bit counter-intuitive about equating the minimization of Type I errors with progress. With the consolidation of a theory during periods of normal science, yes, but that seems like an incomplete notion of progress.