Academia: from inefficient effectiveness to efficient ineffectiveness

If, as I think, academia has gone from being inefficient but effective to being efficient but ineffective (a proposition I won’t defend here), the mechanism for making the switch was going from embodied cognition to abstract Cartesian cognition, or to be more precise from a rich to a shallow and superficial form of embodied cognition. Along the way a God’s eye view of the sector replaced one in which the thinking and doing were deeply embedded in and emergent from the system.

The most important thing an academic system must do is determine relative academic merit. Alas, it’s also the hardest thing to do. Here we are at the forefront of human knowledge where literally every next step, if it’s worthwhile, is two things. It’s at the forefront of its field – which may require a substantial amount of learning and specialisation even to understand. And it’s uncertain as to its outcome – as a rule radically so.

In this situation, the academic system we had in the 1950s was built around a centuries-old institution – the university. At least in its idealised form, expressed by the conservative political theorist Michael Oakeshott, a university was “a corporate body of scholars … a home of learning, a place where a tradition of learning is preserved and extended”. Oakeshott’s description of the nature of scientific endeavour within universities helps clarify how potentially momentous our reform might have been:

Scientific activity is not the pursuit of a premeditated end; nobody knows or can imagine where it will reach. There is no perfection, prefigured in our minds, which we can set up as a standard by which to judge current achievements. What holds science together and gives it impetus and direction is not a known purpose to be achieved, but the knowledge scientists have of how to conduct a scientific investigation. Their particular pursuits and purposes are not superimposed upon that knowledge, but emerge within it. 

In any event, the way this system identified and promoted academic merit was within the broad outlines of the late 19th and early 20th-century notion of professionalism. One generally needed to qualify for admission to the guild of academics with one’s educational attainments (generally a bachelor’s degree until the 1960s), whereupon one proceeded towards higher-status positions which were also more highly and more securely rewarded. More senior academics identified the best of their juniors for support and promotion. The best got the long-term career reward of internal satisfaction and the approbation of those they respected – the very wellsprings of what Adam Smith thought drove a good life in a good society.

We can’t say how good this system was at selecting the best but it seems to have been tolerably effective at allowing the best researchers, or most of them, freedom to pursue their passions. However, there were myriad ways in which the system didn’t work as the ideal suggests it should. Just as lawyers typically come to serve their own interests ahead of the public interest in justice or their client’s need for justice at reasonable cost, academia was inefficient, often failing to put the public interest ahead of academics’ comfort in what they’d grown used to. In addition, crucial public goods on which science is built – such as peer review and the replication of previous studies – went unfunded.

Then came reform. Though it was ostensibly pursued to promote the public interest, and though to this day university research is overwhelmingly funded by the public purse and philanthropy, reformers’ imagination didn’t run to addressing these problems. Indeed, reform seems to have exacerbated the latter problem – the lack of explicit support for the public goods of academia.

Instead it ‘solved’ the apex problem of identifying academic merit by grabbing the nearest thing to hand – citation metrics. To put it another way, it didn’t start from where it was – with a difficult problem which was being tolerably solved by an existing institution but which could clearly be improved upon – and proceed with a thoughtful examination of the problems, a search for potential improvements which were then slowly winnowed out and worked up into actual improvements.

Instead, it made a beeline for a God’s eye view of the problem. What would God want from the university system? Why, He’d want optimality. He’s a pretty optimal kind of guy himself. So he’d want this system to reward the best. The best universities and the best academics. Well, that should be pretty straightforward. Let’s look around. Journal citations look like they do the trick. And they’re even quantitative, so they can all be added up, and Bob’s your uncle. What could possibly go wrong? Of course, lots of things could go wrong and go wrong they have, and go wronger they will as the process not only becomes embedded but triggers Goodhart’s Law.

There’s a deep irony here. Economists exalt the way markets avoid this mistake of having some source, however authoritative, picking winners. Rather, the selection of winners is the emergent product of many different forms of valuation and action from many different perspectives. Yet reform of the higher ed sector is driven by economists’ and policymakers’ fondest imaginings that they’re moving towards a market-based system.

In all this, what’s happened is illustrated by the image above, in which birds’ wings are fitted to a plane. Birds’ wings played an important role in early aviators’ figuring out how to get machines to fly. But, as a degree of thoughtfulness would lead one to expect, simply taking some features of a market and grafting them onto another, quite different system might make it better or worse. And where things need to be finely adapted, one would surely expect it to make things worse. For each part of a plane, like each part of a bird, is highly crafted – and crafted as part of a whole. Transferring the insights that birds’ wings might give one into flying needs a lot of work of the kind that led to the evolution of birds’ wings and the development of planes. One is seeking to use an insight from a mechanism in one domain in another domain which operates according to quite different principles. One might as well transplant a dog’s leg onto a thylacine’s body and imagine it would work effectively.

Yet that’s what we’ve done in one area after another. And called it economic reform.

 


7 Responses to Academia: from inefficient effectiveness to efficient ineffectiveness

  1. paul frijters says:

    I like citations because they make me look better than any other metric does :-)

    But indeed, the move towards measuring academic performance with indicators has led to a lot of undesirable outcomes in social science. The deeper problem is that academia is still self-judging (citations are an internal beauty contest) and that it is so lucrative. The combination of both will always lead away from the public interest as the game orients towards building territories in the lucrative space.

    Really not clear what would work better. More metrics? Evaluations from outside the club? Institutionalization of diverse schools of thought to retain variation?

    Also not clear who has actual agency in this. Lots of top scientists complain about the current system but are powerless to change it. There are strong lock-in effects as everyone is making investments towards the current situation, with the winners having a lot to lose from change. Also, the metrics are now becoming more important as new players use them as quality indicators and buy up people with high metrics (just look at what universities in the Gulf or high-end Chinese universities look for from foreign academics). So if anything, it is getting worse fast.

  2. Nicholas Gruen says:

    Paul, the thing that makes you look better than any other metric is your carefully coiffed shaven head.

    But as to your serious points, firstly if we’re to address things centrally, the very least the centre and the funders can do is to fund the public goods of academia properly – which certainly includes peer review and replication, though in the age of the internet there should be radical changes in the way peer review works. I’d ditch having the ‘publications’ as gatekeepers and have the peer review out in the open as an add-on service to open, easy, blog-style publication.

    On metrics, you seem to be addressing my points from inside the bubble – assuming that ‘the system’ must solve it in a centralised way by proposing alternative metrics. What I’ve proposed is that things have got worse pursuing this course (though it’s hard to be definitive about that). That they were better left alone – inefficient but reasonably effective.

    I mean it doesn’t seem very implausible that the mediocre – which the system will default to – won’t be very good at governing the great. If we want greatness in intellect in universities, then maybe we need to accept that we shouldn’t be treating the few great people there like hamsters on a hamster wheel.

    Still, I know that won’t satisfy you, and it doesn’t satisfy me very much either.

    So here’s another thought that I think does address your concern somewhat.

    It seems to me that the ‘market’ in academia, such as it is, has produced something very similar to the ‘market’ in other areas of culture, which is that it maximises the satisfaction of our wants and starves our needs, or secondary wants – the things we either do or should want to want.

    I think there are some ways of opening up a space of governance to allow such things.

    One thing that I seem to think is much more momentous than anyone else is the use of means of merit selection that interdict self-assertion, as in the mechanism explained here. Other great successes of governance have been built on similar mechanisms of self-denial, such as Cromwell’s New Model Army, and in many ways the 19th-century law of agency and fiduciary duties follows this idea.

    I think such mechanisms could be used to rotate people through governance roles who were nevertheless felt by their peers to be highly meritorious. But the rotation would detach their self-interest from their governance. Anyway, since this is a comment and not a post, I won’t try to work it all out here. One would need to experiment. So at least one takeaway is that we need to create space in the system for such experiments to take place and for their consequences to be understood.

  3. Conrad says:

    I agree with Paul on this. There are any number of perverse and terrible outcomes from the current use of quantitative metrics to determine whether qualitatively different things are better or worse – which is of course generally senseless. It’s like asking whether 10 bananas are better than 4 bolts, concluding that they are because 10 is bigger than 4, then giving the grower of bananas some money and letting the maker of bolts go bankrupt, because who needs a few bolts when you can have more bananas. That would go for teaching metrics as well as research ones.

    Your idea of rotating positions is a good one, which of course used to exist, but this would be almost impossible in the current system because any number of often largely worthless administrators (and VCs) would need to give up their large salaries.

    I think Paul’s idea of institutionalizing some areas is reasonable, although it is hard to see how emerging areas which are often important and do the worst at the citation game would fare. These are generally areas that don’t have well developed journal systems that get into the citation/impact-factor game.

    In an area related to mine, that would have included language technology for the web, which never really got off the ground in Australian universities because those guys never really publish in journals (and hence people either moved to web companies or had to change areas when they were punished by their administrators). Alternatively, it would have saved linguistics in general, because those guys never publish much, and when they do, the expectation of their journals is that they actually do something meaningful – unlike many areas where you can run a permute-the-boring-research strategy with PhD students and your friends and publish vast quantities of junk (one of our recent university award recipients I think published 40 papers last year – I would have thought that would constitute scientific fraud, as another one of our researchers was recently charged with).

    • Nicholas Gruen says:

      Thanks Conrad,

      I’d like to take the next step in this discussion, but I’m not sure precisely what you agree with Paul about and the extent – if any – that this implies you disagree with me.

  4. David Pollock says:

    This has nothing to do with academia but does involve bird wings.
    Years ago I saw a b/w silent film of an inventor halfway up the Eiffel Tower with wings attached to his arms and being urged to jump/fly by an admiring crowd. He was obviously having second thoughts but was too far up the critical path to back out. His third thought is not recorded.

  5. Fencing Spokane Wa says:

    Determining academic merit is interesting. Haha, grafting bird wings to a plane and calling it reform is an interesting visual that I think hits the nail on the head. Leo

  6. Moz of Yarramulla says:

    The trick is to start publishing as soon as possible…

    More South Korean academics caught naming kids as co-authors. The practice was probably used to improve the children’s chances of securing a university place.

    https://www.nature.com/articles/d41586-019-03371-0?hss_channel=tw-18198832

    Goodhart’s Law also arguably breaks economics, because if you’re going to use money as a measure of goodness you can’t also use it to reward people for being good. Oops.
