The master, his emissary and the balance of risk

Is this a bunch of black patches on a white background? It is. Of course it is. (Remember you’re at Troppo now. No mucking around.) It also depicts something which you can’t unsee once you’ve seen it. Such is the power of perspective-taking. Now get back to reading the post and stop slacking off on the pictures.

The performance of expertise is tangled up in status displays. Often that subtly displaces what should be the true object of inquiry. Thus, for instance, economists will often be drawn off into spinning their view of a future which G. L. S. Shackle engagingly called “kaleidic”. As I’ve argued, they should instead be focused, as weather forecasters are, on understanding how much they know – which in economic forecasting would actually involve understanding how little they know.

Further:

  1. Without confidence intervals around the forecasts, they could do more harm than good, and
  2. Forecasts about the major risks to the economy would probably be more useful than ‘point forecasts’ like “the economy will grow by 2.5 percent in the next financial year”. They should be issued in probabilistic form, such as “We estimate the chances of recession in the next 6 months have risen from 10 to 20%”.
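The contrast can be made concrete in a few lines of Python. This is a toy sketch – the normality assumption and all the numbers are invented for illustration, not drawn from any actual forecasting model:

```python
import random

random.seed(0)

def recession_probability(mean_growth, sd, n=100_000):
    """Estimate P(growth < 0) by Monte Carlo, assuming (purely
    for illustration) that growth over the period is normally
    distributed around the point forecast."""
    draws = (random.gauss(mean_growth, sd) for _ in range(n))
    return sum(g < 0 for g in draws) / n

# The same 2.5% point forecast implies very different recession
# odds depending on how uncertain the forecaster really is.
print(recession_probability(2.5, 1.0))  # narrow uncertainty band
print(recession_probability(2.5, 3.0))  # wide uncertainty band
```

The point is not these particular numbers but that the probabilistic statement – unlike the point forecast – forces the forecaster’s uncertainty into the open.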

At the highest level of generality, these problems can be thought of in terms of Nietzsche’s story of the Master and his Emissary. In the story, the Master of a great kingdom can only run his empire by sending emissaries out to govern provinces. The emissary is a competent fellow, but the competence he’s shown his master is tightly defined in some domain – say accounting and running committees. When the emissary usurps his master, his part of the kingdom declines because he lacks wisdom. You know – the wisdom that the master has – masters are like that.

In McGilchrist’s telling of the story1 the master is the ‘right brain’, which is the ‘big picture’ thinker. The left brain is the special ops division – doing special tasks – learning how to put topspin on the ball if you hit a one-handed backhand in tennis, learning accounting, building an epidemiological model, making sure people follow procedures, making sure academics hit their publication KPIs.

One of the most tangible functions of the right brain on the African savannah was scanning for predators. It’s the right brain’s job to frame a problem before and while the left brain analyses it with logic, models and any other tools it has developed, sends back messages to the master and awaits further instructions.

But here’s the thing, we don’t teach right-brain skills at uni. We assume them. When you go to uni you learn left-brain stuff. We have whole disciplines dominated by left-brain toolmaking and tool-using primates. We give some of them Nobel Prizes. Unfortunately, some boosters of the humanities regard ‘the humanities’ as the solution to these problems. So long as they’ve not been colonised by ideology or other nonsense, these disciplines are all very well, but they’re not focused particularly on the kind of role I’m arguing we’re so desperately short of here, which is the role of the right brain in applying the tools of the left brain.2

The pandemic is providing plenty of teachable moments which show what a massive toll this bias is taking on human wellbeing right now. Nassim Nicholas Taleb and Yaneer Bar-Yam take up the story in this excellent piece: [Though they have what looks like a completely gratuitous go at Dominic Cummings in the middle of it which, as far as one can tell, isn’t based on any relevant insider evidence whatever.]

The error in the UK is on two levels. Modelling and policymaking.

First, at the modelling level, the government relied at all stages on epidemiological models that were designed to show us roughly what happens when a preselected set of actions are made, and not what we should make happen, and how.

The modellers use hypotheses/assumptions, which they then feed into models, and use to draw conclusions and make policy recommendations. Critically, they do not produce an error rate. What if these assumptions are wrong? Have they been tested? The answer is often no. For academic papers, this is fine. Flawed theories can provoke discussion. Risk management – like wisdom – requires robustness in models.

Note something here which I highlighted in a recent post – the God’s eye view. The epidemiologists have a model that aspires to a God’s eye view. It, or models like it, should certainly be part of a sensible response. But

  1. Our use of the models should be guided by a wider understanding of how to use them – what blind spots they have and therefore what we should both pay attention to in their outputs and what questions we should do further research on to try to refine our understanding and hone it to the critical questions we want answered.
  2. Our thinking shouldn’t be seduced by the God’s eye view. We’re not Gods. We’re little humangoes and we are trying to figure out what to do from our point of view.  (And ‘our’ point of view might involve our individual interest and/or the interests of those groups of which we consider ourselves a part.) As I suggested here regarding economics, the point of the discipline is to help us answer the question “what should we do?” We may well need to mount specific researches asking specific questions about the way the world is, but the primary motivation should be our need to build tools that can help us manage our world for the better.
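One way to see how much a single uncertain assumption matters is to vary it and watch the model’s headline answer move. Below is a toy discrete-time SIR model – an illustrative sketch, emphatically not the Imperial College model – in which plausible disagreement about R0 alone swings the answer to “will the health system cope?”:

```python
def sir_peak_infected(r0, days=300, recovery_days=10, i0=1e-4):
    """Minimal discrete-time SIR model (a toy for illustration).
    Returns the peak fraction of the population infected at once,
    which is roughly what 'will the health system cope?' turns on."""
    gamma = 1 / recovery_days      # daily recovery rate
    beta = r0 * gamma              # daily transmission rate
    s, i = 1 - i0, i0              # susceptible and infected fractions
    peak = i
    for _ in range(days):
        new_infections = beta * s * i
        s -= new_infections
        i += new_infections - gamma * i
        peak = max(peak, i)
    return peak

# Feeding the same model different but defensible values of R0:
for r0 in (1.5, 2.5, 3.5):
    print(r0, round(sir_peak_infected(r0), 3))
```

An executive summary that led with this kind of sensitivity – rather than a single run – would surface exactly the ‘what we don’t know’ that matters.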

At least as far as I understand it – and I may be being unfair in my ignorance – the Imperial College simulations were rolled out as a ‘take’ on the crisis which pointed at what to do. But though what they suggest we do is of deep significance, the other thing of fantastic significance is the role they can play in surfacing precisely what we don’t know. If they came with a list of crucial assumptions that might be wrong – and I don’t know if they did – was that list prominent in the executive summary of the document?

Off the top of my head, some of those assumptions were

  • the length of hospital stays indicated
  • the number of stays indicated over time
  • the interaction with the capacity of the system
  • the extent to which capacity could be augmented
  • the viability of the ‘hammer and dance’ strategy
  • the relevant economic cost of different options
  • the R0 of the virus in different populations – particularly children
  • the proportion of people who have the disease who are asymptomatic

And plenty of others. Each of these assumptions could have been documented in appendices setting out what was known and what might be found out, and might have come with blog posts for public comment (either by invitation or in some filtered way – for instance, you might require an academic email address – far from perfect, but we’re in a crisis here).

There was one monster assumption in the UK, raised to the level of potential national catastrophe, but it’s also been present in Australia, and it’s a huge issue I think. That’s the idea that there’s nothing we can do to prevent the virus spreading even after we get the number of cases down, so we should avoid full lockdown and try to get through. Perhaps this assumption is right. It certainly wasn’t with SARS or ebola. But it’s critical. And yet we’ve seen whole press conferences in which our PM and his trusty Chief and Deputy Chief Medical Officers asserted it as a simple fact. It’s not a simple fact – it’s the most crucial assumption of our time. If it’s right the government’s strategy is a pretty good one. If it’s wrong it’s probably terrible and certainly the wrong one with foresight – who knows what will turn up in hindsight.

There are more lessons from this than I can elaborate here, but let me finish this post with a concrete suggestion which can function as an example of a different way of approaching our dilemmas – focused on the balance of risks rather than imagining the trick is in identifying and heading towards our preferred future. I recall the Productivity Commission arguing that, where there’s price control of privately supplied infrastructure, policy should not aim for the optimum price as it would come out of a model (the price at which the private provider receives just enough return to want to expand capacity when it should).

That doesn’t take into account the risk of getting it wrong.3 Given this, the price controller should aim to set the price a bit above the optimum level. Why? Because the efficiency costs of having the price too high are small – and indeed vanishingly small if the price is only slightly misaligned. But the costs of having the price too low are high. Over time, underinvestment in the asset can generate high and growing external costs through congestion which can take a long time to turn around.
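The PC’s logic is an asymmetric loss function under uncertainty, and a few lines of simulation make the point. Everything below is invented for the sketch – the error distribution and the loss coefficients are illustrative, not calibrated to any real infrastructure market:

```python
import random

random.seed(1)

def expected_loss(price_offset, n=50_000):
    """Expected welfare loss when we aim price_offset above our
    estimate of the optimum, which itself misses the true optimum
    with some error. The loss is deliberately asymmetric, as in
    the PC's argument: a price a bit too high costs little, a
    price too low causes compounding under-investment."""
    total = 0.0
    for _ in range(n):
        error = random.gauss(0, 1)     # estimation error around the true optimum
        gap = price_offset + error     # realised price minus true optimum
        if gap >= 0:
            total += 0.1 * gap ** 2    # too high: small efficiency loss
        else:
            total += 2.0 * gap ** 2    # too low: congestion, under-investment
    return total / n

# Scan candidate offsets: the loss-minimising aim is above zero,
# i.e. deliberately price above the modelled optimum.
losses = {d / 10: expected_loss(d / 10) for d in range(21)}
best = min(losses, key=losses.get)
print(best)
```

The same structure drives the stimulus argument below: when the costs of undershooting and overshooting are asymmetric, the model’s point estimate is the wrong place to aim.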

As we come out of this disaster, governments will be stimulating the economy to return it to its potential. In television studio land the balance of risks is set by body language. That’s how we’ve decided (disastrously as it’s turned out) to cut interest rates very slowly and reluctantly.4 Everyone knows that being more concerned about inflation than unemployment is responsible, adult, indeed masculine behaviour. Erring on the other side is for sissies. I watched it happen in 1991. We did much better in 2008-9. But even there with hindsight, I think we should have gone a little harder.

The ‘optimal aim’ with a stimulus is to land the economy as close as possible to its maximum sustainable growth path. And you don’t know enough to know what that is – and even if you did, you wouldn’t know enough to hit the target precisely. If you’re too unambitious you forego a lot of growth: unemployment is higher than it need be, with thousands suffering out of work and young lives blighted, and you miss your inflation target. If you are too ambitious, inflation goes over the target. And if it becomes entrenched, we don’t know how to bring it down without a recession.

The lie of the land intimated in my last paragraph leads me to suggest that, just as the PC suggests ‘aiming high’ with infrastructure prices, we should do the same with our stimulus measures. If they’re too big, we’ll get inflation. We’ve had too little inflation recently, and certainly when inflation is low (say below 4 percent), too little is substantially worse than too much – the obverse of the case with prices on private infrastructure. That may leave us the task of reining in inflation after the event, which can be costly. But it will also come with benefits in reducing elevated debt levels after the stimulus. And there are no risk-free paths here. I’d rather have that problem than its obverse – which we’ve had since the official family decided to leave monetary policy where it was – at a 3 per cent overnight cash rate – and then cut very slowly and reluctantly, despite its own forecasts of worsening unemployment.

  1. There’s a great interview with him on EconTalk introducing you to the basic ideas.
  2. Note parenthetically how the arts tried to muscle in on the STEM panic recently. Along they came and – ‘Banksy’ like – stuck an “A” for Arts into STEM, making it STEAM. Note that that’s injecting something in a merely additive way, as if the kind of thinking that improves the world were just a whole lot of different kinds of skill added into the mix without attending to their structural relations.
  3. Indeed, the certainty of getting it wrong, at least to some extent.
  4. People argue that the reluctance of the RBA has been attributable to its ‘leaning against the wind’ on asset prices. This is fine, but if that’s the case then, first, it should release some modelling taking us through the argument and identifying the risks, and second, had it done so, the assumptions its current strategy is based on would have been surfaced. A crucial assumption is that cutting reluctantly doesn’t create a greater risk of a bubble because of the ‘one way bet’ it gives the markets, which know that it’s likely to be years before rates rise – whereas the chances of rates rising after more aggressive action are much higher.

23 Responses to The master, his emissary and the balance of risk

  1. paul frijters says:

    hmmm, I agree with the gradual move away from realism in the social sciences, at least when it comes to bigger-picture stuff (in other ways there is too much realism such that we are nearing the pretense that economics is applied neuroscience).
    but
    “The modellers use hypotheses/assumptions, which they then feed into models, and use to draw conclusions and make policy recommendations. Critically, they do not produce an error rate. What if these assumptions are wrong? Have they been tested? ”

    I think this is just not true. These epidemiologists are not stupid in that sense. They chide each other about uncertainties and evidence all the time.

    More important is that they have a particular territory and find it totally fine not to stray out of it and to presume nothing outside of it matters. So it does not naturally occur to them to think of the economic externality on other countries of shutting down factories. They adopt the attitude of “well, prove this to me and until then I will assume no such negative feedback from my suggestion of imprisoning the population”. Also the effects of panic, mass imprisonment, etc., on mental health and policy pressures are totally outside of their territory and they hence have a disciplinary blindness to them.

    So I don’t think the solution to such problems is more awareness of what they don’t know. You will never get them to realistically assess all unknowns, for the simple reason that they can’t: that is too hard for any scientist. Asking the impossible just leads to more bs. We should think of processes in which broader expertise – wisdom, as you put it – is in the room and can be listened to, and communicated to the population.

    • Nicholas Gruen says:

      Indeed – I’m suggesting precisely not that the epidemiologists take into account everything. When it comes to economics there should be some process by which they and the economists get together and thrash out issues.

      Your reference to epidemiologists might be right in the academy, but it does seem to be the case that the UK policy of herd immunity was proposed without much care to identify all the assumptions it was based on other than those fed into the models.

      • paul frijters says:

        I don’t know the inside track on the issue of herd immunity, but I sort of doubt that those who proposed it didn’t honestly think of the many assumptions on spread and virus control involved, including their uncertainty. Their blind spot is in what other effects their proposed solutions have, not the nature of the health problem they are looking at. So I actually believe that their initial view of herd immunity probably was on balance the right way to look at it from a medical point of view. They probably lost courage when inundated with critique and veiled threats about how they were taking a gamble, risking lives etc. They were then quite a bit more prone to say how feeding Italian data into their model made them change their mind, which then changed the minds of the politicians.

        Anyhow, that is what the public information looks like on this issue of what happened behind doors. Since this is the UK, I am sure the story will eventually be told openly in several lengthy biographies from different angles, making lots of people look bad. Since Boris is an excellent writer, my money is on his version being the most read and thus authoritative.

  2. desipis says:

    “The modellers use hypotheses/assumptions, which they then feed into models, and use to draw conclusions and make policy recommendations. Critically, they do not produce an error rate. What if these assumptions are wrong? Have they been tested? ”

    Having read through the modelling used to justify keeping the schools open in Australia, I think this is right on point.

    • derrida derider says:

      Na, Paul’s right on this. They will have done runs over an enormous range of parameters to generate a large and complex state space, with lots of endogenous variables (mostly generated by the possible policy responses). That, after all, is why they need seriously grunty computers – a simple base case you could do by hand. This is a lot more sophisticated and, properly interpreted, more useful than attempting simple Gaussian-based confidence intervals – precisely because it has endogenous responses.

      Modelling the welfare losses (ie the economic costs) of the various responses and scenarios is technically FAR harder than modelling the course of the epidemic itself, mainly because it is all in effect massively out of sample. So Paul’s schtick about balancing economic costs of different policies is a lot harder than he thinks.

      Of course as I keep saying here to those who denounce my “formalism” (sounds Pravda-like doesn’t it?) the alternative to tested formal models is not no models but untested, unrigorous, unacknowledged mental models – even less reliable. Treasury and their overseas counterparts are undoubtedly giving advice on economic effects as we speak, but like everyone else’s it will be made up shit because we just can’t know.

  3. David Harris says:

    It would be hard to believe that a virus like COVID19 will go away. Its fierce capability for infection means that it will be with humanity – including the developed countries – for ever. This means that, for us to work together, we must have a high level of herd immunity. This can only be acquired by letting most of the population get the disease and recover with immunity, OR by inoculating the majority of the population with a vaccine, which we don’t yet have. Australia is currently spending $130 billion on keeping the economy going, and this has an expected life of around 6 months. Developing a new vaccine, and having it manufactured, distributed and used, will take a long time. Two years at the most optimistic, and realistically a lot more. So the $130B injection will need to be repeated over and over. Can the nation afford this? How will future generations cope with the resulting deficits? We need to find a new regime which does things differently. I’m not sure that current modelling will achieve this.

  4. Alex Coram says:

    Risk and uncertainty have a lot of interesting complications and Nick’s post covers a lot of them. Here are some comments on a few.
    1. Uncertainty.
    Uncertainty is often approached the wrong way. Margins of error are a form of certainty. This is good for things like weather forecasts where the rate of drift over time is pretty well known.
    For higher levels of uncertainty, dimensionless models might be used to explore trajectories, relative rates of change, stationary points, bifurcations, sensitivity to parameters and so on. This isn’t numbers, but it is often useful information.
    Accepting uncertainty also puts the emphasis on exploring the balance of upside and downside risk. In a lot of cases it is better to think in max-min terms and err on the side of prevention than try to fine tune. Taleb and Nick say this.

    2. Models.
    The problem isn’t with the models the epidemiological people or Imperial College used. (I assume they were pretty standard – it is a well trodden path).
    The problem is that they gave a lot of epidemiological projections, but the question that was being asked is what should we do about it? Taleb is partly right on this. Simple back of the envelope models would have been alarming and indicated a bit of preparation back in January would have been a good bet. But there is a deeper issue. This is that the mathematics of the question wasn’t understood and the models were being used for the wrong purpose.
    The question, what should we do? is a question in optimal control. If you tried to construct a model you would get a set of differential equations with exponential growth and a set of controls with imperfect information and time lags. In essence it would tell you that the possibility of steering the states to the target set with the required dynamic constraints etc etc is about zero. In short, don’t even try. Just use the precautionary early move robust policies that the back of the envelopes indicate.
    I think I am saying that:
    Mathematical models are great when they are properly used,
    Margins of error are a form of certainty and need to be used with caution
    Back of the envelope for almost everything where information is coarse,
    Think about the question and what the mathematics should look like,
    Think about using min-max for coarse information and risk

  5. Jim KABLE says:

    (a) STEM->(b) STEAM->(c) MATES.

    I prefer MATES

  6. Robert Banks says:

    Nicholas

    I haven’t delved into the modelling at all, respecting or assuming the diversity and at least some degree of openness, which should lead to a degree of wisdom of crowds. But having said that, I’d be interested to know whether any groups are using agent-based modelling, and hence are in a position to estimate which are the most critical parameters in the model(s).

    “Flatten the curve” seems a bit like reliance on interest rates to “control” the economy, almost incantatory – flattening the curve is a strategy that is predicted to result from the combined effects of a number of actual tactics, and is an outcome, in turn a means to a greater outcome.

    It’s weird, when reflected upon, that we seem to place so much reliance on interest rates, when the complex adaptive system that is our (or anyone’s) economy has so many interacting forces.

    And today is a good day to suggest that the “god’s-eye” view is by definition not open to any of us – we being agents in the system. Do detached observers (the technocrats who construct economic models) include the preferences, actions and frequency of detached observers in their models?

    And perhaps one reason we don’t see much general or widely accessible discussion of the modelling of our economy is that such would require being open about outcomes such as the distribution of risk, poverty etc.

    Re the picture at the head of your article, I’ve been thinking that there is a sort of rabbit in the mid-foreground, slightly turned facing into the future (away from us) very like the figure in a well-known painting I think by Wyeth.

    Anyway, your article is kind of like a chewy treat. Thanks.

    • Nicholas Gruen says:

      The toy simulations on web pages where you watch dots run around in boxes infecting other dots are agent-based modelling. It seems to me that all the probabilistic modelling would have basically the same structure. If you watch one of those, you’ll see how they illustrate the impact of different elements of strategies to get the infection rate down.

      And you’re right about where the figure is, but it isn’t a rabbit!
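      For anyone who wants to poke at one, a toy version of those dot simulations fits in a few dozen lines of Python. Everything here – parameters included – is invented for illustration and is not any model actually used in the crisis:

```python
import random

random.seed(2)

def run_abm(n_agents=150, steps=120, infect_radius=0.03,
            p_infect=0.5, recovery_steps=30, movement=0.02):
    """Toy agent-based epidemic: dots wander around a unit box and
    may infect nearby dots, in the spirit of the browser simulations
    described above. Returns the fraction of agents ever infected."""
    # Agent state: 0 = susceptible, >0 = infected (steps until
    # recovery), -1 = recovered and immune.
    pos = [[random.random(), random.random()] for _ in range(n_agents)]
    state = [0] * n_agents
    state[0] = recovery_steps                      # patient zero
    r2 = infect_radius ** 2
    for _ in range(steps):
        # Each dot takes a small random step, wrapping at the edges.
        for p in pos:
            p[0] = (p[0] + random.uniform(-movement, movement)) % 1.0
            p[1] = (p[1] + random.uniform(-movement, movement)) % 1.0
        # Infected dots may infect nearby susceptible dots.
        for i, s in enumerate(state):
            if s > 0:
                for j, t in enumerate(state):
                    if t == 0:
                        dx = pos[i][0] - pos[j][0]
                        dy = pos[i][1] - pos[j][1]
                        if dx * dx + dy * dy < r2 and random.random() < p_infect:
                            state[j] = recovery_steps + 1  # active from next step
        # Count infections down towards recovery.
        state = [-1 if s == 1 else (s - 1 if s > 1 else s) for s in state]
    return sum(1 for s in state if s != 0) / n_agents

print(run_abm())
```

      Lowering p_infect or movement stands in for distancing measures, and rerunning with different seeds gives a cheap feel for how variable the outcomes are.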

  7. Robert Banks says:

    It’s a dog, with its nose down, walking towards the upper left corner. A medium-sized breed, maybe like a pointer, and it has a collar on, I think.

    This is so much easier than following some of the arguments about economic policy – but then, I may be wrong re the dog. In which case I’ll probably give up, demonstrating a deep lack of persistence.

  8. David Harris says:

    If the dark patch is a pool of water there is a person swimming/drowning in it.

  9. David Harris says:

    I’m not trying to ignore modelling. Dammit, I was a modeller before I retired. But I think that we have something very basic to work on first, and that is herd immunity. All the isolation, distancing and “flattening the curve” can do, in the absence of a vaccine, is reduce the rate at which sick people in Australia use the medical system, hopefully down to a rate at which the system can handle it. It may be possible to eliminate the virus from Australia, if our borders are absolutely bulletproof, but our “normal life” involves jobs in tourism, education, farming etc, all of which require bringing people – lots of them – into the country. If we let down our guard and bring many people in from overseas, COVID19 will certainly return, and we’ll launch into another panic program to try to control its spread. As we don’t know when an effective and safe vaccine will be available, and there is a wide error band in guessing when this could be, it seems that we are missing a fundamental factor in any modelling.

    • David
      Agree, however it will take time.
      Am reminded of something that happened in week four of the roughly nine weeks of the fires around the Braidwood region: somebody rang the Mongarlowe RFS Brigade’s headquarters and in a bossy tone said “why haven’t you put the fires out” (an exact word-for-word quote, and almost certainly a Canberran).

  10. Nicholas Gruen says:

    Another piece in a similar vein

    • Nicholas Gruen says:

      Still, as I wrote to the person who sent it to me:

      Indeed

      But I think the piece you sent is still way too confident about models. In most situations they should be used to manage risks and identify the most urgent things we need to know.

      That that is so in the case of a novel pandemic is obvious – but very few people are saying that – though Taleb is merciless about how bad they’ve been (including some rather gratuitous sniping at Dominic Cummings which might be justified but not for his ramblings about DARPA on his blog). But not even Taleb is saying how the modelling should have been used to guide ‘next steps’ in targeting the gathering of evidence and interpreting it.

  11. David Harris says:

    It will be interesting when the modellers apply their skills to places like Indonesia. Pretty important for us as our near neighbour. It is a big country, with around 240 million people living on around 6,000 inhabited islands. It is poor. It can’t afford a health system or a border control system comparable with Australia’s, and people can’t afford to stop working, because if they do, they starve. It’s hard to see how they have any option but to settle for herd immunity, incurring millions of deaths on the way, but emerging with an immune (and younger) population.

    • Nicholas Gruen says:

      Probably right

      But, if Paul’s observations on other threads are anything to go by, they’ll ape the West. Where Paul’s suspicious of the wisdom of it for the West, it’s going to be collateral class warfare in poorer countries, where, as a result of lockdowns, far more of their poorer people will die than middle-class people will be protected.

      What a terrible state of affairs. Modelling ought to help bring out the issues there, but not epidemiological modelling which is tightly targeted at the health issues – and then only the health issues relating to the virus – and ignores wider, and in that case vastly more important issues.

  12. Nicholas Gruen says:

    Good piece apropos of these issues here

    Scientists of all stripes should work together to improve public health, and none should mistake a professional tendency or a specialist’s rule of thumb for an unshakable epistemological principle. All should support rigorous evidence gathering, especially for the costliest and most disruptive interventions. And insofar as scientists identify with a philosophical school that predisposes them to write off certain forms of evidence entirely, they should, in short, get over it. Instead we should use every possible source of insight at our disposal to gain knowledge and inform decisions, which are always made under uncertainty—rarely more so than at present.
