Conclusions and Recommendations
Our conclusions are of two sorts. A few are specific and not far-reaching, and could, if approved, be put into effect fairly readily. Others rest on more fundamental questions as to the economics involved in the procedures, and are so far-reaching as to call for further study, and presumably consultation among the interested agencies, before they could be embodied in working procedures. On certain points we agree with majority positions, on others with the minority; but on some rather fundamental matters we dissent from both. We take the principle of the "with and without" comparison as controlling, but find that certain limitations on its application have prevented the procedures used from truly reporting the results of such a comparison, which they are supposed to report, or at least to approximate.
We seek a method of determining which projects promise a net surplus of national benefits over conditions as they would be without them; also of ranking different projects, and different increments of project scope, in order of relative economic justifiability. All parties appear to agree that the basic answer sought is the aggregate of quantitative differences in the national real income, with and without a given project, wherever such differences may occur and whether they are plus or minus; to which should be added consideration of qualitative differences and evaluation of them wherever possible. The inescapable difficulty, even for the quantitative differences, is that accurate and definitive answers for the ramifying secondary effects would require omniscience. Lacking this, some things can still be measured, but not all the hypothetical effects of the presence or absence of a given project. Something short of measurement is inevitable. Methods of meeting this difficulty are subject to several criteria.
First, similar standards should be applied to different projects, and to both sides of the "with and without" comparison.
Second, these standards should be framed in the light of objectively valid conceptions of the essential cause-and-effect relationships, and these conceptions should be kept clear and distinct from compromises of procedure that may be necessitated by limitations of evidence or otherwise. If this is done, a range will appear – in some cases quite a wide range – within which the answer may depend on judgment, without possibility of even relatively precise measurement.
Third, in this situation, without pretending an unreal precision, a different kind of end may be served if formulas can be devised that yield results within the general order of magnitude which judgment suggests. Such formulas would at least make for uniformity and comparability, reducing the scope for the vagaries of personal equation or agency bias. This is in itself a weighty consideration.
Lastly, if at all possible, procedures should be simplified. One of the apparent vices of the present situation is the fact that some of the procedures are so complex and involved that the meaning of what lies behind a benefit-cost ratio is accessible only to a select few, even among the initiated. This hampers what needs facilitating; namely, proper democratic scrutiny of the proposals of executive agencies. The material involves inescapable complexities, and uniformity in handling it requires formulas of some sort, at some stage or stages. Democracy has to rely on technicians in matters inscrutable to the non-specialist, but preferably where the specialist is following a well-authenticated technique. In this case, the disagreements among the specialists are evidence that they do not possess such an authenticated technique, for the results of which a representative government can safely take their word. It needs to be able to tell what they are doing, and what their procedures mean.
All this creates a dilemma difficult to resolve. In this dilemma there appear to be two tenable alternatives. The more drastic method would be to abandon the attempt to measure secondary benefits. Computation would then be reduced to the furnishing of evidence on the basis of which secondary benefits may be appraised by a frank exercise of judgment. The less drastic method is to use formulas calculated to give results that fall within the range of reasonable judgment-estimates, but which are with equal frankness treated as rules of thumb, not as definitive measures.
In the first case, regional offices should furnish statistical evidence prepared under rules that are uniform within a given agency and at least comparable as between agencies. The first judgment-estimates would presumably be made in the central office of each agency, as a means to intra-agency uniformity; but inter-agency comparability and equity would be in the hands of higher authorities, who would need to have the evidence passed up to them in usable form. If this seems too indefinite – and the President's Water Resources Policy Commission has asked for greater definiteness and inter-agency uniformity – there is an arguable case for the view that it is better than spurious definiteness, that final decisions would then frankly rest on judgment rather than implicitly follow benefit-cost ratios calculated by different agencies in different ways, and that the suggested method would tend to furnish the ultimate decision with better-prepared evidence than it now has. We see force in this view, but we recognize also that the pressure for rules of thumb is strong, and the advantages of more definite uniformity great. We therefore propose continued use of formulas, regarded as rules of thumb and altered from present Bureau practices, for "stemming" and "induced" benefits; but for some elements we propose that they be left to the exercise of judgment, without attempting quantitative measurement.