Academia: from inefficient effectiveness to efficient ineffectiveness

If, as I think, academia has gone from being inefficient but effective to being efficient but ineffective (a proposition I won’t defend here), the mechanism for making the switch was going from embodied cognition to abstract Cartesian cognition – or, to be more precise, from a rich to a shallow and superficial form of embodied cognition. Along the way a God’s eye view of the sector replaced a system in which the thinking and doing were deeply embedded in and emergent from the system.

The most important thing an academic system must do is determine relative academic merit. Alas, it’s also the hardest thing to do. Here we are at the forefront of human knowledge, where literally every next step, if it’s worthwhile, is two things. It’s at the forefront of its field – which may require a substantial amount of learning and specialisation even to understand. And it’s uncertain as to its outcome – as a rule radically so.

In this situation, the academic system we had in the 1950s was built around a centuries-old institution – the university. At least in its idealised form as expressed by the conservative political theorist Michael Oakeshott, a university was “a corporate body of scholars … a home of learning, a place where a tradition of learning is preserved and extended”. Oakeshott’s description of the nature of scientific endeavour within universities helps clarify how potentially momentous the reform we’ve undertaken might have been:

Scientific activity is not the pursuit of a premeditated end; nobody knows or can imagine where it will reach. There is no perfection, prefigured in our minds, which we can set up as a standard by which to judge current achievements. What holds science together and gives it impetus and direction is not a known purpose to be achieved, but the knowledge scientists have of how to conduct a scientific investigation. Their particular pursuits and purposes are not superimposed upon that knowledge, but emerge within it. 

In any event, the way this system identified and promoted academic merit was within the broad outlines of the late 19th and early 20th-century notion of professionalism. One generally needed to qualify for admission to the guild of academics with one’s educational attainments (generally a bachelor’s degree until the 1960s), whereupon one proceeded towards higher-status positions which were also more highly and more securely rewarded. More senior academics identified the best of their juniors for support and promotion. The best got the long-term career reward of internal satisfaction and the approbation of those they respected – the very wellsprings of what Adam Smith thought drove a good life in a good society.

We can’t say how good this system was but it seems to have been tolerably effective at allowing the best researchers, or most of them, freedom to pursue their passions. However, there were myriad ways in which the system didn’t work as the ideal suggests it should. Just as lawyers typically come to serve their own interests ahead of the public interest in justice or their client’s need for justice at reasonable cost, academia was inefficient, often failing to put the public interest ahead of academics’ comfort in what they’d grown used to. In addition, crucial public goods on which science is built – such as peer review and the replication of previous studies – went unfunded.

Then came reform. Though it was ostensibly pursued to promote the public interest, and though to this day university research is overwhelmingly funded by the public purse and philanthropy, reformers’ imagination didn’t run to addressing these problems. Reform seems to have exacerbated the latter problems relating to the lack of explicit support for the public goods of academia.

Instead it ‘solved’ the apex problem of identifying academic merit by grabbing the nearest thing to hand – citation metrics. To put it another way, it didn’t start from where it was – with a difficult problem that was being tolerably solved by an existing institution but which could clearly be improved upon – and proceed with a thoughtful examination of the problems and a search for potential improvements, to be slowly winnowed out and worked up into actual improvements.

Instead, it made a beeline for a God’s eye view of the problem. What would God want from the university system? Why He’d want optimality. He’s a pretty optimal kind of guy himself. So he’d want this system to reward the best. The best universities and the best academics. Well, that should be pretty straightforward. Let’s look around. Journal citations look like they do the trick. And they’re even quantitative, so they can all be added up and Bob’s your uncle. What could possibly go wrong? Of course, lots of things could go wrong and go wrong they have, and go wronger they will as the process not only becomes embedded but triggers Goodhart’s Law.
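As a toy illustration of that last point – the model, the numbers and the ‘gaming’ term below are all invented for the purpose, not drawn from any study – here is a sketch of how Goodhart’s Law hollows out a metric: once citations become the target, citation-seeking effort adds variation unrelated to quality, and the correlation between the metric and underlying merit decays.

```python
import random

random.seed(0)

def metric_quality_correlation(gaming_effort, n=1000):
    """Toy model: each academic has a latent quality; observed citations
    are quality plus noise plus whatever citation-gaming effort buys.
    Returns the Pearson correlation between quality and citations."""
    quality = [random.gauss(0, 1) for _ in range(n)]
    citations = [q + random.gauss(0, 0.5) + gaming_effort * random.gauss(1, 0.3)
                 for q in quality]
    mean_q = sum(quality) / n
    mean_c = sum(citations) / n
    cov = sum((q - mean_q) * (c - mean_c)
              for q, c in zip(quality, citations)) / n
    var_q = sum((q - mean_q) ** 2 for q in quality) / n
    var_c = sum((c - mean_c) ** 2 for c in citations) / n
    return cov / (var_q * var_c) ** 0.5

# Before citations become a target: the metric tracks quality fairly well.
before = metric_quality_correlation(gaming_effort=0.0)
# After Goodhart's Law kicks in: gaming adds quality-independent noise
# (and a constant shift), drowning out the signal the metric once carried.
after = metric_quality_correlation(gaming_effort=3.0)
print(before, after)
```

With the gaming term switched off the correlation sits near 0.9; switch it on and it falls markedly – the metric still exists, but it measures the gaming as much as the merit.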

There’s a deep irony here. Economists exalt the way markets avoid this mistake of having some source, however authoritative, picking winners. Rather, the selection of winners is the emergent product of many different forms of valuation and action from many different perspectives. Yet reform of the higher ed sector is driven by economists’ and policymakers’ fondest imaginings that they’re moving towards a market-based system.

In all this, what’s happened is illustrated by the image above, in which birds’ wings are fitted to a plane. Birds’ wings played an important role in early aviators’ figuring out how to get machines to fly. But, as a degree of thoughtfulness would lead one to expect, simply taking some features of a market and grafting them onto another, quite different system might make it better or worse. And when things need to be finely adapted, one would surely expect it to make things worse. For each part of a plane and each part of a bird is a highly crafted part of a highly crafted whole. Transferring the insights that birds’ wings might give one into flying machines needed a lot of work. That’s the work that evolution already did on the bird’s wing and which aeronautical engineers did in developing planes. One is seeking to use an insight from a mechanism in one domain in another domain which operates according to quite different principles. One might as well transplant a dog’s leg onto a thylacine’s body.

Yet that’s what we’ve done in one area after another. And called it economic reform.

 

This entry was posted in Economics and public policy, Education. Bookmark the permalink.

20 Responses to Academia: from inefficient effectiveness to efficient ineffectiveness

  1. paul frijters says:

    I like citations because they make me look better than any other metric does :-)

    But indeed, the move towards measuring academic performance with indicators has led to a lot of undesirable outcomes in social science. The deeper problem is that academia is still self-judging (citations are an internal beauty contest) and that it is so lucrative. The combination of the two will always lead away from the public interest as the game orients towards building territories in the lucrative space.

    Really not clear what would work better. More metrics? Evaluations from outside the club? Institutionalization of diverse schools of thought to retain variation?

    Also not clear who has actual agency in this. Lots of top scientists complain about the current system but are powerless to change it. There are strong lock-in effects, as everyone is making investments towards the current situation, with the winners having a lot to lose from change. Also, the metrics are now getting more important as new players use them as quality indicators and buy up people with high metrics (just look at what universities in the Gulf or high-end Chinese universities look for in foreign academics). So if anything, it is getting worse fast.

  2. Nicholas Gruen says:

    Paul, the thing that makes you look better than any other metric is your carefully coiffed shaven head.

    But as to your serious points: firstly, if we’re to address things centrally, the very least the centre and the funders can do is to fund the public goods of academia properly – which certainly includes peer review and replication, though in the age of the internet there should be radical changes in the way peer review works. I’d be ditching having the ‘publications’ as gatekeepers and have peer review out in the open as an add-on service to open, easy, blog-style publication.

    On metrics, you seem to be addressing my points from inside the bubble – assuming that ‘the system’ must solve it in a centralised way by proposing alternative metrics. What I’ve proposed is that things have got worse pursuing this course (though it’s hard to be definitive about that). That they were better left alone – inefficient but reasonably effective.

    I mean it doesn’t seem very implausible that the mediocre – which the system will default to – won’t be very good at governing the great. If we want greatness in intellect in universities, then maybe we need to accept that we shouldn’t be treating the few great people there like hamsters on a hamster wheel.

    Still, I know that won’t satisfy you, and it doesn’t satisfy me very much either.

    So here’s another thought that I think does address your concern somewhat.

    It seems to me that the ‘market’ in academia such as it is has produced something very similar to the ‘market’ in other areas of culture which is that it maximises the satisfaction of our wants and starves our needs or secondary wants – the things we either do or should want to want.

    I think there are some ways of opening up a space of governance to allow such things.

    One thing that I seem to think much more momentous than anyone else does is the use of means of merit selection that interdict self-assertion, as in the mechanism explained here. Other great successes of governance have been built on similar mechanisms of self-denial, such as Cromwell’s New Model Army, and in many ways the 19th century law of agency and fiduciary duties follows this idea.

    I think such mechanisms could be used to rotate people through governance roles who were nevertheless felt by their peers to be highly meritorious. But the rotation would detach their self-interest from their governance. Anyway, since this is a comment and not a post, I won’t try to work it all out here. One would need to experiment. So at least one take out is that we need to create space in the system for such experiments to take place and for their consequences to be understood.

  3. Conrad says:

    I agree with Paul on this. There are any number of perverse and terrible outcomes from the current use of quantitative metrics to determine if qualitatively different things are better/worse (which is of course generally senseless — it’s like asking if 10 bananas are better than 4 bolts and concluding that they are because 10 is bigger than 4 and giving the grower of bananas some money and letting the maker of bolts go bankrupt, because who needs a few bolts when you can have more bananas). That would go for teaching metrics as well as research ones.

    Your idea of rotating positions is a good one, which of course used to exist, but this would be almost impossible in the current system because any number of often largely worthless administrators (and VCs) would need to give up their large salaries.

    I think Paul’s idea of institutionalizing some areas is reasonable, although it is hard to see how emerging areas which are often important and do the worst at the citation game would fare. These are generally areas that don’t have well developed journal systems that get into the citation/impact-factor game.

    In an area related to mine, that would have included language technology for the web, which never really got off the ground in Australian universities because those guys never really publish in journals (and hence people either moved to web companies or had to change areas when they were punished by their administrators). Alternatively, it would have saved linguistics in general, because those guys never publish much, and when they do, the expectation of their journals is that they actually do something meaningful – unlike many areas where you can run a permute-the-boring-research strategy with PhD students and your friends so you can publish vast quantities of junk (one of our recent university award recipients I think published 40 papers last year – I would have thought that would constitute scientific fraud, as another of our researchers was recently charged with).

    • Nicholas Gruen says:

      Thanks Conrad,

      I’d like to take the next step in this discussion, but I’m not sure precisely what you agree with Paul about and the extent – if any – that this implies you disagree with me.

  4. David Pollock says:

    This has nothing to do with academia but does involve bird wings.
    Years ago I saw a b/w silent film of an inventor halfway up the Eiffel Tower with wings attached to his arms and being urged to jump/fly by an admiring crowd. He was obviously having second thoughts but was too far up the critical path to back out. His third thought is not recorded.

  5. Fencing Spokane Wa says:

    Determining academic merit is interesting. Haha, grafting bird wings to a plane and calling it reform is an interesting visual that I think hits the nail on the head. Leo

  6. Moz of Yarramulla says:

    The trick is to start publishing as soon as possible…

    More South Korean academics caught naming kids as co-authors. The practice was probably used to improve the children’s chances of securing a university place.

    https://www.nature.com/articles/d41586-019-03371-0?hss_channel=tw-18198832

    Goodhart’s Law also arguably breaks economics, because if you’re going to use money as a measure of goodness you can’t also use it to reward people for being good. Ooops.

  7. Alan says:

    The Romance of the Three Kingdoms famously opens by saying the empire once united must divide, and once divided must unite.

    Before the Mongol conquest in 1271, only 5 dynasties – Qin, Han, Jin, Sui and Tang – had ever unified China. Qin and Jin lasted less than a generation and Sui lasted less than two generations. The total period of unification was 786 years. As discussed earlier in this thread, Tang was convulsed by a catastrophic civil war in 755 and the dynasty never really recovered central control of all China.

    That is a relatively brief unification for a country where history reaches back to 2070 BCE. Perhaps world empires are not a worry because they are inherently unstable.

  8. Nicholas Gruen says:

    The very sad case of the great maths teacher who gave up, being complianced and complianced to submission

  9. Conrad says:

    One of the odd things about these sorts of stories is that even the top maths teachers conflate crazy rules (which I can’t blame them for hating — I would) with the overall decline in mathematics. You might hope people that think about maths all day might think about the data and causation a bit better. In this case, if you look at maths scores across countries, there has been a decline across most Western countries. So unless they all experienced this or a similar set of rules, it is unlikely to be the main contributor.

    Similarly, we constantly hear stories about teaching quality and how it is all important, but there are large differences in achievement across countries across all the main areas of education and I find it hard to imagine that, for example, teachers in say, Turkey, are somehow massively less motivated or worse than teachers in, say, Australia. So the effect size caused by differences in teacher quality cannot predict these differences. This should be one of the big stories of the PISA data.

    So there are certainly crappy things going on, but the bigger effects are not explained by these types of anecdotes.

  10. John Quiggin says:

    Good points, but important to note this is basically a UK/Aust/NZ story. US was never as laid back as we were in the 1950s, and hasn’t gone nearly as far down the managerialist route since then. European universities had different problems – state bureaucracy etc.

    More broadly, the replacement of inefficient but effective professionalism with efficient but ineffective managerialism is the whole story of New Public Management. Even more broadly, of neoliberalism in general.

    • Nicholas Gruen says:

      It’s a fair cop!

    • JOHN CHANDLER says:

      The decision to make universities bigger and for “everyone” in the 1980s (Dawkins) was the beginning of the end. While universities were for relatively small parts of the population, professionalism sufficed; however, once the teaching-only institutions were amalgamated with the research institutions they became too big, and the result was huge increases in funding.

      Funding increased the risks for the Federal government, so it started to ask for all types of management interventions to show that these risks were being dealt with by universities. As a result, universities thought they needed to introduce managerialism to lower these risks, and paying people lots of money to be responsible for those risks seemed only natural. The only problem is that you need really good governance to make sure that the managers are actually managing efficiently and effectively, and as we have seen with the banking community, that is pretty hard to do. It is especially hard when you have poor governance structures and ineffectual members making up the numbers on university councils.

      It is time to disentangle ‘Dawkins’ Bastards’ and reset the system to incorporate highly relevant teaching-only institutions (Institutes of Technology – Colleges of Advanced Education) that train people for Workplaces 4.0 & 5.0, and reinvent the university model so that universities focus on research and research training. The net effect will be a huge reduction in the cost of higher education, while allowing universities to do what they do best.

      Brian Schmidt at ANU seems to understand this better than anyone. He wants ANU to remain small and focused on research. As a leader, he wants to be involved and visible. He doesn’t believe in “Big Management” and appears to be trying to wind back managerialism. He is differentiating ANU to be like an old-style university. It is about time the others realised that time is against them. Change will come eventually, so they need to decide whether they are better placed to be research- or teaching-oriented.

      • paul frijters says:

        “Funding increased the risks for the Federal government and so they started to ask for all types of management interventions to show that these risks were being dealt with by universities. As a result, universities thought that they needed to introduce managerialism to lower these risks and paying people lots of money, to be responsible for these risks, seemed only natural.”

        You shouldn’t believe that for a second. It was an insider takeover, allowing the grabbers to award themselves millions in salaries and perks. It had nothing to do with responses to risks, though that has been the excuse of choice for their activities. They run publicly owned businesses using the language of business but with the reality of straightforward appropriation of public resources.

        You also shouldn’t believe the ANU’s current boss is doing anything substantial to turn the tide there. It is so easy for smart men to be seduced by flattery. Just look at what has happened to IT at the ANU. Or HR. Central admin thrives on big strategic initiatives.

        Things are far worse than you realise.

        • Paul, the huge quantitative expansion of unis between 1970 and 1990 did, more than anything else, drive the qualitative changes.
          By the time of Dawkins, government was trying to deal with something that was well established.
          The funny thing is that, given the enormous lobbying, advice and ‘education’ power the uni sector has, very few think or speak of the university sector simply as just another large, mostly self-serving Big “X” industry.

  11. Ian says:

    thanks Nicholas, all. I confess I do find some perverse pleasure in watching smart and motivated academics game dumb metrics. After all, for every measurement scheme there is a measurement scam. But, as I think Paul’s comment highlights, is it scamming when all the university ‘status seeking’ managers are effectively encouraging you to do it? And those independent arbiters who compile these metrics want you to care about your rankings as well.

    In terms of what to do about it, I’ve wondered why some of the non Go8 universities here in Australia don’t invest some time and money in getting suitable researchers (with expertise in journal website crawlers, natural language processing, data analytics, statistics, etc) to unpack these metrics games using all the on-line journal and researcher data available.

    These researchers could create some new, even more illuminating, metrics themselves (eg. a PubClub rating for journals which indicates how many of the papers in it have an author who is also on the Editorial Board). Or perhaps Citation club metrics for researchers if they have a couple of non-coauthor academics with whom they always seem to be swapping citations.
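    A minimal sketch of how Ian’s hypothetical ‘PubClub’ and citation-club metrics might be computed – the function names, the swap threshold, and every name in the example data below are invented purely for illustration:

```python
def pubclub_rating(papers, editorial_board):
    """Fraction of a journal's papers with at least one author who also
    sits on its editorial board (Ian's hypothetical 'PubClub' metric)."""
    board = set(editorial_board)
    insider = sum(1 for authors in papers if board & set(authors))
    return insider / len(papers) if papers else 0.0

def citation_club(cites, min_swaps=3):
    """Pairs of academics who repeatedly cite each other.
    `cites` maps (citing_author, cited_author) -> citation count;
    a pair is flagged when both directions meet the threshold."""
    return sorted({tuple(sorted((a, b)))
                   for (a, b), n in cites.items()
                   if n >= min_swaps and cites.get((b, a), 0) >= min_swaps})

# Invented example data:
papers = [["A. Smith", "B. Jones"], ["C. Wu"], ["D. Patel", "A. Smith"]]
board = ["A. Smith", "E. Brown"]
print(pubclub_rating(papers, board))  # 2 of 3 papers include a board member

swaps = {("A. Smith", "C. Wu"): 5, ("C. Wu", "A. Smith"): 4,
         ("B. Jones", "D. Patel"): 1}
print(citation_club(swaps))  # only the reciprocal heavy-citers are flagged
```

    On the toy data, two of the three papers carry an editorial-board author, and the one reciprocal citation pair above the threshold is flagged; the real work, of course, would be in the crawling and name disambiguation, not in arithmetic like this.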

    After all, if there are some universities that just can’t win at this game, why not highlight that it’s just a game, albeit one with fairly perverse outcomes for the folks paying for all of this. And offer students and partners (industry, government, NGOs, the community) something else – a focus on what really matters. Of course, the gaming of the various impact measures is also now a thing. I give up.

  12. Pingback: SATURDAY’s GOOD READING AND LISTENING FOR THE WEEKEND | John Menadue – Pearls and Irritations

  13. Nicholas Gruen says:

    Research Registries: Facts, Myths, and Possible Improvements
    Eliot Abrams, Jonathan Libgober, and John A. List #27250

    Abstract:
    The past few decades have ushered in an experimental revolution in economics whereby scholars are now much more likely to generate their own data. While there are virtues associated with this movement, there are concomitant difficulties. Several scientific disciplines, including economics, have launched research registries in an effort to attenuate key inferential issues. This study assesses registries both empirically and theoretically, with a special focus on the AEA registry. We find that over 90% of randomized control trials (RCTs) in economics do not register, only 50% of the RCTs that register do so before the intervention begins, and the majority of these preregistrations are not detailed enough to significantly aid inference. Our empirical analysis further shows that using other scientific registries as aspirational examples is misguided, as their perceived success in tackling the main issues is largely a myth. In light of these facts, we advance a simple economic model to explore potential improvements. A key insight from the model is that removal of the (current) option to register completed RCTs could increase the fraction of trials that register. We also argue that linking IRB applications to registrations could further increase registry effectiveness.
