Discursive Collapse

In the second of what is turning into a great series of posts, Richard Green has been discussing economic methodology with a bunch of us, most particularly Paul Frijters. In the latest post Richard says this:

The 1st generation of work will come up with a mess of concepts. The second generation will pragmatically simplify this for preliminary work, but with caveats. Subsequent generations disregard both the 1st generation and the 2nd generation's caveats.

You got it bro. It's infuriating. And it happens again and again, making economics a kind of idiot savant discipline. So many of its graduates are so full of technique that they are blissfully unaware of basic caveats about the limitations of the techniques they are taught. It's not that those limitations are never discussed – they usually are, in some perfunctory way – but they are not seriously integrated into the discipline in the way that you'd be made to feel a fool if you had only a slender grasp of them. And yet knowing the limitations of tools is surely as important as knowing how to use them; it is, after all, integral to choosing the right tools.

I call the phenomenon Richard identifies 'discursive collapse', as explained below in an extract from some old, as yet unpublished work. Before I get to that, I'll thank him for adding an example.

My archetypal example would be the path macroeconomics took following IS-LM. Instead of being taken for what it was, a tentative attempt to formalise Keynes, it became a basis to build on, and the discipline suffered as a result. Despite the received notion of scientific method, in which data are sought and reality observed to test the theory, our models end up relying on what (we think) is observable.

In any event, here's my outline of discursive collapse. (I'd probably write it up a little differently now, as a stand-alone piece with less emphasis on 'apex value' and more on formal method for its own sake, but this will be adequate to give you the picture.)

Philip Mirowski has commented that modern economic discourse has been intolerant of discussion which offers little promise of definitive conclusion. Take, for example, the discipline's search for normative foundations upon which to base claims about economic welfare and efficiency. Here, economists' search for an analytical foundation or "apex value" (Lindblom) to free economics from contention and value judgment, and so prepare the way for definitive closure, drove them to the Pareto criterion of welfare comparison. According to that criterion, the economic welfare of a group of people can be said to improve if the economic welfare (generally income or wealth) of some of them improves whilst the situation of none of them deteriorates. In many, but not all, circumstances the criterion is both harmless and useful as a sufficient condition for demonstrating a welfare improvement – although it does not avoid value judgment. But, lured by the putative uncontentiousness of the Pareto criterion, the discipline has gravitated towards using it more strongly, as something which is both necessary and sufficient for welfare improvement.

It is evident that such a fastidious criterion places a debilitating limitation on welfare claims about the empirical world. Virtually any significant economic development or policy involves losses for some. Given this difficulty, makeshifts have been constructed around the idea of 'winners' compensating 'losers', so as to transform a situation which does not meet the criterion into one that does. The fastidiousness of the Pareto criterion is thus satisfied – but only 'in theory'. In practice, administrative limitations, adverse selection, moral hazard, imperfect information and political exigencies combine to ensure the virtual impossibility of compensation systems capable of delivering actual Pareto improvements.
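For readers who like things concrete, the contrast between the strict criterion and its compensation-based relaxation can be sketched in a few lines of code. The figures and function names below are entirely my own illustration, not anything from the welfare economics literature:

```python
# A minimal sketch contrasting the strict Pareto criterion with the
# Kaldor-Hicks "potential compensation" test. All figures are made up.

def is_pareto_improvement(before, after):
    """True only if no one is worse off and at least one person is better off."""
    return all(a >= b for a, b in zip(after, before)) and \
           any(a > b for a, b in zip(after, before))

def passes_kaldor_hicks(before, after):
    """True if winners' gains exceed losers' losses, i.e. compensation is
    possible in principle; no actual transfer is checked."""
    return sum(after) > sum(before)

# A policy that enriches two people at a third's expense:
before = [100, 100, 100]
after = [150, 120, 90]   # person 3 loses 10

print(is_pareto_improvement(before, after))  # False: someone is worse off
print(passes_kaldor_hicks(before, after))    # True: gains of 70 exceed losses of 10
```

Only an actual transfer of at least 10 to person 3 would turn the second verdict into the first – and it is precisely that transfer which, for the reasons above, so rarely happens in practice.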

Since the original purpose of this procedure is to forestall contention around competing values, there can be no reason for relaxing it at any stage in one’s reasoning from theory to practice in a particular policy situation. To do so later in the process is to raise the question of why one proceeded with such fastidiousness earlier. And raising the possibility of less fastidious foundations of welfare comparison is to raise the prospect that the analytic method of welfare economics – its construction from a (nearly) uncontentious foundation – may be of very limited use: that to be useful the skills of the welfare economist might need to be more modestly focused around what Self has called “the genuine rationalist role of clarifying issues and narrowing areas of disagreement”.

This is not, however, what has occurred. Instead the fundamentals of welfare economics have moved from the fruitful difficulties and turmoil associated with establishing and developing the Pareto criterion in the first half of this century to an unwholesome equilibrium. Debate about the theoretical underpinnings of concepts such as welfare and efficiency has gradually subsided while practice has oriented itself around one particular set of theoretical foundations which increasingly go unexamined and even unacknowledged. As Mishan complains:

The innocent layman might reasonably suppose that on so fundamental a concept as the criterion of economic efficiency there would be either basic agreement within the profession or else raging controversy, for unless such criterion can claim legitimacy the conclusions reached by economists in this vital area . . . cannot be taken seriously. In fact there is neither – only some desultory sniping from time to time. Among the many writers who have recourse to a criterion for ranking alternative organizations, few give much thought to the question of [the criterion’s] legitimacy.

This is the phenomenon which is here called ‘discursive collapse’. The discipline seeks, by way of various makeshifts, to deal with the radical insufficiency of its own theoretical foundations and techniques for generating compelling conclusions about the empirical world. In so doing, certain shortcuts are taken and the consequence is frequently some travesty of the original intentions. Thus in a recent discussion, one economist comments on policies which “could lead to a true Pareto improvement – one in which at least some sections of the community could be made better off but in which no (or few) sections were made worse off”. In this example the Pareto criterion now applies to sections of the community, and it applies give or take a few sections. The great bulk of welfare economics is now done by modelling the welfare of a community as a single consumer or by reference to an efficiency criterion based on ‘potential Pareto improvement’, each of which is analogous to defining welfare as gross wealth aggregated across individuals. Such work has valuable uses, but producing conclusions which do not depend on interpersonal comparisons is not one of them. Aggregating wealth across individuals is precisely the concept which the Pareto criterion offered to move beyond in its assessment of welfare!
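To see just how close the 'potential Pareto improvement' test is to simple aggregation, consider a toy comparison (again, the figures and the function are my own illustration):

```python
# Toy illustration: the "potential Pareto improvement" test reduces to
# comparing aggregate wealth, which silently ranks distributions.

def aggregate_test(before, after):
    """Passes iff total wealth rises; equivalent to the Kaldor-Hicks test."""
    return sum(after) > sum(before)

equal = [100, 100, 100]   # total 300
unequal = [280, 15, 10]   # total 305

print(aggregate_test(equal, unequal))  # True, though two of three people lose heavily
```

The test approves the move to the unequal distribution simply because the total is larger – an interpersonal comparison smuggled in by fiat.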

Largely as a result of the fastidiousness of the Pareto criterion, when economists come to the inevitable questions concerning the respective interests of ‘winners’ and ‘losers’, they find themselves untutored by their discipline. Thus, ironically, it is the practitioners of a discipline which is underpinned by a fastidious commitment to avoiding interpersonal welfare comparisons who are so often in the vanguard of those urging sacrifices on some, in the interests of the ‘greater good’.

Postscript: See also this later post on discursive collapse.

This entry was posted in Economics and public policy.

9 Responses to Discursive Collapse

  1. Don Arthur says:

    One of the ways people avoid asking hard questions about welfare economics is by being extremely selective about how it is applied.

    There are social norms about how policy makers, economists and advocates apply tools like cost benefit analysis. But strangely, these norms are rarely made explicit.

    For example, why shouldn’t we use cost benefit analysis to decide whether public executions would be a good idea? We could do a willingness to pay survey to estimate the benefits of watching criminals being tortured and killed.

    Raise this example in a discussion with experts and they’ll roll their eyes. What a silly idea! Of course nobody would ever do this! But where are the criteria for deciding when it’s ‘appropriate’ to apply welfare economics and when it’s not?

    It seems to me that the techniques of welfare economics are rarely used as decision tools. Instead they are used as legitimation tools.

    When the techniques give answers that seem obviously wrong we all agree not to apply them. When they give answers we think look right we pretend that we’re not relying on our own values or opinions but on objective analysis.

    Here’s an example of how ad hoc judgments get mixed in with fastidious analysis:

    A caveat: the Hicks-Kaldor test requires only that a project have the potential for a Pareto improvement when combined with compensation payments. To require that compensation actually accompany the project is a more stringent condition that may conflict with egalitarian concerns. What if the winners from a project are poor, while those to be compensated (the would-be losers) are rich? Should the poor pay compensation to the rich?

    Is there some formal procedure for dealing with ‘egalitarian concerns’? Don’t be silly!

  2. “the techniques of welfare economics are rarely used as decision tools. Instead they are used as legitimation tools”. Amen to that.

    And it's a very illustrative quote. Economics is more or less constantly playing that trick of arbitrary, and certainly not properly argued-for, switches between the most casual kind of reasoning – in both empirics and theory – and formal modelling.

    CBA is at least a worthwhile way of structuring one's consideration of a matter, even though there are other (less hubristic) ways of trying to make decisions. Lindblom wrote some quite good things about this in the 1970s.

    And for the unwary, that last link of Don’s should have contained another piece of information (pdf) or perhaps (pdf pp.248). ;)

  3. Don Arthur says:

    Nicholas – I was also struck by your comment that:

    modern economic discourse has been intolerant of discussion which offers little promise of definitive conclusion.

    So when an economically trained analyst pulls a not-properly-argued-for switch, they refuse to discuss the issue. Often these ad hoc switches are the only thing that protects a particular brand of welfare economics from collapsing in the face of a reductio ad absurdum argument.

    The rationale seems to be that there’s no point arguing about something that can’t be resolved with data or deductive reasoning. As a result contentious policy prescriptions often hinge on unargued claims.

  4. Nicholas Gruen says:

    Just found this example of discursive collapse or of something coming to be assimilated within the discipline of economics in a way that is a travesty of its original intent – the Coase Theorem.

  5. Nicholas Gruen says:

    And another example – in a very different field – the cliché of taking “the road less travelled” is a travesty of Frost’s original poem which is a meditation on the mendacity of an old man retelling the story of his life. Yet the cliché lives on.

  6. Nicholas Gruen says:

    A link of relevance

  7. Nicholas Gruen says:

    Another nice example.

    The opinion that econometric theory is largely irrelevant is held by an embarrassingly large share of the economics profession. The wide gap between econometric theory and econometric practice might be expected to cause professional tension. In fact a calm equilibrium permeates our journals and our meetings. We comfortably divide ourselves into a celibate priesthood of statistical theorists, on the one hand, and a legion of inveterate sinner data analysts, on the other. The priests are empowered to draw up lists of sins and are revered for the special talents they display. Sinners are not expected to avoid sins; they need only confess their errors openly. [Leamer, 1978, p. vi]

    Leamer, E. E. Specification Searches: Ad Hoc Inference with Non-Experimental Data. New York: John Wiley and Sons, 1978.

  8. Nicholas Gruen says:

    From 1984

    By 2050 – earlier probably – all real knowledge of Oldspeak will have disappeared … Chaucer, Shakespeare, Milton, Byron – they’ll exist only in Newspeak versions, not merely changed into something different, but actually changed into something contradictory to what they used to be.

  9. Nicholas Gruen says:

    A nice quote outing Samuelson for a bit of discursive collapse – or perhaps one might call it discursive disjunction

    The metaphysics you hold while breezily reassuring readers: "no metaphysics here … move on".

    Samuelson has independent utilities: "I assume no mystical collective mind that enjoys collective consumption goods" (1954, p. 387). But Samuelson posits a social welfare function of the Bergson-Samuelson type. This allows him to aggregate marginal utilities or marginal rates of substitution across individuals. Samuelson called his crucial aggregation condition, his equation 2, "the new element … which constitutes a pure theory of government expenditure on collective consumption goods" (1954, p. 387; see also Musgrave 1983, 1986). The most important point is that Samuelson is able to bypass what has come to be known as the revelation of preferences problem. This is because, contrary to his assertions, there is a collective mind, an ethical observer to whom preferences are somehow known (Samuelson 1954, p. 388). Thus the revelation issue is bypassed. All that would be needed in his scenario is for taxes and transfers to "be varied until society is swung to the ethical observer's optimum" (Samuelson 1954, p. 388).
