Foreword: I discovered this post – which I’d entirely forgotten about – the other day. It’s a cracker, and because I wrote a comment on it, it’s received some further comments on account of turning up in the ‘recently commented on threads’ list. So I’m sticking it on the front page for a few days to give it another go. NG
Brad DeLong and other worthies occasionally hoist some conspicuously worthwhile comment from comments threads to make their own post. And who says such comments have to come from one’s own blog? Well, this was the nearest thing to it. The comment is from Steve Randy Waldman’s blog Interfluidity and is a comment on a very recent post of Steve’s that was the inspiration for a post of mine last night. Anyway, it’s a great comment. It’s about our ignorance and what to do about it.
In a little follow-up to my comment on the problems of non-evaluable industries, I thought I’d share some insights from my own extremely bipolar (in terms of valuation) industry – the military.
At the lowest tactical level – good or bad decisions have immediate and stark life and death results, and there are causal connections between inputs and outputs that are obvious even to laypersons. Following training exercises we conduct post-mortems, “after action reviews”, and even the most junior enlisted soldier can often explain what went right and wrong because of the closeness in time and clear connection between cause and effect. In fact, American-style intense military training would be futile and impossible if soldiers couldn’t understand these things. Thankfully, most can, something that is ensured through the testing that is part of the initial entry selection process.
On the other hand, at the highest strategic levels – where decisions about the allocation of significant present-day resources are made about managing international relationships and developing innovative capabilities that may or may not bear fruit decades hence in terms of competitive-edge superiority – almost no one, not even the experts, can actually tell what constitutes a good decision.
You often hear about all the horrible waste and “dead-end projects” in the military, but the truth is, there’s almost no other effective way to deal with this long-term “non-evaluable” problem except through the shotgun approach, which is to (most likely) over-allocate, and throw as much as you can afford of everything you can against the wall to see what sticks. What sticks is the seed for a new round of randomized variation in a process that resembles natural selection or genetic programming. Just like with biological life, evolution through mass waste, pain, and failure is, tragically, the only effective survival strategy in an overwhelmingly complex, hostile, and competitive environment of scarce resources. Ask your local investment banker.
It’s not much different from the world of corporations that do long-term innovation research – the capacity of different nations (or companies) to afford the largest number of projects, and absorb the losses of the inevitable large numbers of dead-ends, can often determine who wins in the long run. Smaller players usually have to just sit this state-of-the-art game out, and free-ride in terms of catch-up development when their intelligence services eventually discover our new successful systems. Our edge costs one thousand times more than their copying, and usually only lasts a few years, but there’s no way around that.
But now let me get back to that concept of “actionable intelligence”, because it’s just so fundamental to this discussion. Commanders have a certain freedom of maneuver and various possible courses of action from which they must choose. The Intelligence collectors and analysts often have insanely gigantic amounts of the wrong information and never quite enough of the right information – the kind that makes it clear what to do right now. Think “Federal Reserve”.
All modern knowledge-research-analysis fields (of which Military Intelligence is one) have to come to terms with their peculiar limits of knowledge-acquisition. If you cannot test reality through market success, controlled experiments, randomized clinical trials, or other “gold-standard” investigations (for whatever reason: ethics, money, feasibility, etc…), then the pressure is to settle for the silver- and bronze- and lesser-standard work that you *can* do. In Intelligence (and to a certain extent, in finance), it’s even worse, because you face a smart enemy whose very life is at stake, which motivates him to conceal exactly this information and even to actively mislead you in a false direction.
The corrupting psychological temptation, however, is the tendency of researchers to unconsciously upgrade the value of the information they can produce, because “it’s the best we can do” and because it reflects on the status, influence, and reputation of their chosen field. Much Social Science in high-“causal density” fields operates under these unjustified knowledge-acquisition-standard upgrade assumptions.
What you end up with are very weak relationships and correlations that only very slightly narrow down the enormous range of possibilities from which to choose. Knowing that your target lived in the Western half of a city of 400,000 last month is hardly better than knowing nothing at all if the only information that can be useful to you in terms of moving assets is knowing which block he’s in right now. The Intelligence Officer is often able to hand the Commander as much of this low-value information as he can possibly stand, but even in sum it’s very rarely “actionable”.
But Commanders *want* to decide and act, an impulse and will to power instead of passive helplessness. It’s in their nature. And the urge is that “We can only make the best decision we can with the information we have, and the information we have may be junk, but it’s the best we can do, so let’s use it.” The problem with this type of thinking, which seems reasonable on its surface, is that it creates a pretense of knowledge – a sense that one’s decision is justified when, in fact, it is not – and the pretense of knowledge leads to worse decisions than the admission and acceptance of one’s own ignorance, or even the pretense of ignorance.
It is now well known that Commanders will make better decisions not by desperate attempts to use the limited low-value knowledge they have, but by filtering out and even disregarding all low-value knowledge as essentially worthless, and assuming instead that they have none. When you assume and accept your own ignorance in a scenario of great uncertainty, you shift your focus from forward movement to security – from concentrating on engaging in future risks, to concentrating on discovering and shoring up the vulnerabilities in your defenses against an unknown surprise attack. You seek to make your systems less brittle and more robust, while at the same time you reallocate your resources from trigger-pulling to information-collecting, so that you can acquire the real, useful, high-value information – “actionable intelligence” – that you really need to make progress.
It’s no wonder then that when the feedback-loop between operations and intelligence was broken in the months after the initial invasion (again, because we were lulled into complacency), we floundered in Iraq, and when General Petraeus painstakingly reestablished it (at high cost), the situation improved quickly and dramatically. As usual, in practically all the media stories, there was almost no account of the status of this critical element, and hence no accurate narrative of the reasons behind what was really going on. Neither the war’s boosters nor its detractors had any idea why things were progressing the way they were, and this is most evident in that both the initial failure and later success were almost complete surprises to everybody except the Soldiers actually losing, but then winning, the war. Soldiers don’t get much press though.
I hope by context you can see how relevant this is to the whole “non-evaluable” state-aligned institution problem, in Health, Education, Social Policy, Finance, Government, (Macroeconomics?) etc… We have fields whose “causal density” is so high that even the best knowledge we have (or even can have) does not actually rise to the level of “actionable intelligence” of the kind that is useful to human decision makers. What we get, instead, is the unjustified upgrading of the knowledge we do have, the metrics fetish, and the overconfidence that comes from the pretense of knowledge.
When things are going well, we are lulled into the complacency of our own pretense at our mastery over reality, and we are tempted to take greater risks and “shoot for the moon”. But the pretense of knowledge is worse than the assumption of ignorance if we don’t make our systems robust to downside surprises.
In our inaccessible, non-evaluable institutions, if we admit (or even just assume) we have (or can have) no more good knowledge or ideas (“actionable intelligence”) for how to change practices to achieve much better results with our scarce resources, then we should probably become paranoid about waste, malinvestment, and misallocation, and also shift our focus from progress to security – to defending against those things that could undermine our current level of success, or collapse the whole system.
If we are to build meta-non-evaluable institutions, then that’s probably what they should do – act as a kind of rationally-paranoid “national security veto” over incautious policy. Sometimes, it’s good to force the elite leaders of society’s institutions to behave as if the fields they lead are not, in fact, expert and invulnerable, but actually ignorant and fragile. The Dinosaurs were expert and invulnerable in their world – but in life, everything is ignorant and fragile when things change. And things always change. They taught me that in the Army.