Foreword: I discovered this post – which I’d entirely forgotten about – the other day. It’s a cracker, and because I wrote a comment on it, it’s received some further comments on account of turning up in the ‘recently commented on threads’ list. So I’m sticking it on the front page for a few days to give it another go. NG
Brad DeLong and other worthies occasionally hoist some conspicuously worthwhile comment from their comments threads to make a post of its own. And who says such comments have to come from one’s own blog? Well, this was the nearest thing to it. The comment below is from Steve Randy Waldman’s blog Interfluidity, left under a very recent post of Steve’s that was the inspiration for a post of mine last night. Anyway, it’s a great comment. It’s about our ignorance and what to do about it.
Indy writes:
In a little follow-up to my comment on the problems of non-evaluable industries, I thought I’d share some insights from my own extremely bipolar (in terms of valuation) industry – the military.
At the lowest tactical level – good or bad decisions have immediate and stark life and death results, and there are causal connections between inputs and outputs that are obvious even to laypersons. Following training exercises we conduct post-mortems, “after action reviews”, and even the most junior enlisted soldier can often explain what went right and wrong because of the closeness-in-time and clear connection between cause and effect. In fact, American-style intense military training would be futile and impossible if they couldn’t understand these things. Thankfully, most can, something that is ensured through the testing that is part of the initial entry selection process.
On the other hand, at the highest strategic levels – where decisions about the allocation of significant present-day resources are being made about managing international relationships and developing innovative capabilities that may or may not bear fruit decades hence in terms of competitive-edge superiority – almost no one, not even the experts, can actually tell what constitutes a good decision.
You often hear about all the horrible waste and “dead-end projects” in the military, but the truth is, there’s almost no other effective way to deal with this long-term “non-evaluable” problem except through the shot-gun approach, which is to (most likely) over-allocate, and throw as much as you can afford of everything you can against the wall to see what sticks. What sticks is the seed for a new round of randomized variation in a process that resembles natural selection or genetic programming. Just like with biological life, evolution through mass waste, pain, and failure is, tragically, the only effective survival strategy in an overwhelmingly complex, hostile, and competitive environment of scarce resources. Ask your local investment banker.
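To make the mechanism concrete: the “over-allocate, see what sticks, re-seed from the survivors” process described here is essentially the loop that evolutionary-computation people write down explicitly. A minimal toy sketch follows – the payoff function, project representation and budget numbers are all invented for illustration, not drawn from anything in the comment:

```python
import random

# Toy sketch of the "shotgun" process described above: fund many projects,
# keep the few that "stick" after a reality test, then seed a new round of
# randomized variation from the survivors. Purely illustrative -- the payoff
# function, project representation and budget numbers are all invented.

def random_project():
    # A "project" is just a vector of five design choices.
    return [random.gauss(0, 1) for _ in range(5)]

def payoff(project):
    # Stand-in for what reality eventually rewards; the allocator cannot
    # predict this, only observe it after the fact (hence the noise term).
    return -sum((x - 0.7) ** 2 for x in project) + random.gauss(0, 0.5)

def shotgun_round(seeds, budget=100, survivors=10, mutation=0.3):
    # Over-allocate: fund far more variants than will ever pay off.
    portfolio = list(seeds)
    while len(portfolio) < budget:
        parent = random.choice(seeds)
        portfolio.append([x + random.gauss(0, mutation) for x in parent])
    # See what sticks: keep only the best performers as seeds for the next round.
    portfolio.sort(key=payoff, reverse=True)
    return portfolio[:survivors]

seeds = [random_project() for _ in range(10)]
for _ in range(20):
    seeds = shotgun_round(seeds)
print("best observed payoff after 20 rounds:", round(payoff(seeds[0]), 2))
```

The detail that matters is that nothing in the loop requires anyone to know in advance which variants will succeed; the waste of the dead-ends is the price of the search.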
It’s not much different from the world of corporations doing long-term innovation research – the capacity of different nations (or companies) to afford the largest number of projects, and absorb the losses of the inevitable large numbers of dead-ends, can often determine who wins in the long run. Smaller players usually have to just sit this state-of-the-art game out, and free-ride in terms of catch-up development when their intelligence services eventually discover our new successful systems. Our edge costs one thousand times more than their copying, and usually only lasts a few years, but there’s no way around that.
But now let me get back to that concept of “actionable intelligence” because it’s just so fundamental to this discussion. Commanders have a certain freedom of maneuver and various possible courses of action from which they must choose. The Intelligence collectors and analysts often have insanely gigantic amounts of the wrong information and never quite enough of the right information – the kind that makes it clear what to do right now. Think “Federal Reserve”.
All modern knowledge-research-analysis fields (of which Military Intelligence is one) have to come to terms with their peculiar limits of knowledge-acquisition. If you cannot test reality through market success, controlled experiments, randomized clinical trials or other “gold-standard” investigations (for whatever reason: ethics, money, feasibility, etc…), then the pressure is to settle for silver- and bronze- and lesser-standard work that you *can* do. In Intelligence (and to a certain extent, in finance), it’s even worse, because you have a smart enemy, motivated – because his very life is at stake – to conceal this information and even to actively mislead you in a false direction.
The corrupting psychological temptation, however, is the tendency of researchers to unconsciously upgrade the value of the information they can produce, because “it’s the best we can do” and because it reflects on the status, influence and reputation of their chosen field. Much Social Science in high-“causal density” domains operates under these unjustified knowledge-acquisition-standard upgrade assumptions.
What you end up with are very weak relationships and correlations that only very slightly narrow down the enormous range of possibilities from which to choose. Knowing that your target lived in the Western half of a city of 400,000 last month is hardly better than knowing nothing at all if the only information that can be useful to you in terms of moving assets is knowing which block he’s in right now. The Intelligence Officer is often able to hand the Commander as much of this low-value information as he can possibly stand, but even in sum it’s very rarely “actionable”.
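To put a rough number on that example: treated information-theoretically, learning which half of the city the target lived in is worth about one bit, whereas locating him to one block among, say, 10,000 blocks – an assumed figure, purely for illustration – is roughly what an “actionable” fix would require:

$$\log_2\!\frac{400{,}000}{200{,}000} = 1\ \text{bit}, \qquad \log_2 10{,}000 \approx 13.3\ \text{bits}.$$

So the intelligence on hand supplies well under a tenth of the information the decision actually needs – and that’s before a smart enemy starts feeding you deliberately false bits.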
But Commanders *want* to decide and act – an impulse and will to power instead of passive helplessness. It’s in their nature. And the urge is: “We can only make the best decision we can with the information we have, and the information we have may be junk, but it’s the best we can do, so let’s use it.” The problem with this type of thinking, which seems reasonable on its surface, is that it creates a pretense of knowledge – a sense that one’s decision is justified when, in fact, it is not – and the pretense of knowledge leads to worse decisions than the admission and acceptance of one’s own ignorance, or even the pretense of ignorance.
It is now well known that Commanders will make better decisions not by desperate attempts to use the limited low-value knowledge they have, but by filtering and even disregarding all low-value knowledge as being essentially worthless and assuming instead that they have none. When you assume and accept your own ignorance in a scenario of great uncertainty, you shift your focus from forward movement to security – from concentrating on engaging in future risks, to concentrating on discovering and shoring up the vulnerabilities in one’s defenses against an unknown surprise attack. You seek to make your systems less brittle and more robust, while at the same time you reallocate your resources from trigger-pulling to information-collecting so that you can acquire the real, useful, high-value information “actionable intelligence” that you really need to make progress.
It’s no wonder then that when the feedback-loop between operations and intelligence was broken in the months after the initial invasion (again, because we were lulled into complacency), we floundered in Iraq, and when General Petraeus painstakingly reestablished it (at high cost), the situation improved quickly and dramatically. As usual, in practically all the media stories, there was almost no account of the status of this critical element, and hence no accurate narrative of the reasons behind what was really going on. Neither the war’s boosters nor its detractors had any idea why things were progressing the way they were, and this is most evident in that both the initial failure and later success were almost complete surprises to everybody except the Soldiers actually losing, but then winning, the war. Soldiers don’t get much press though.
I hope by context you can see how relevant this is to the whole “non-evaluable” state-aligned institution problem – in Health, Education, Social Policy, Finance, Government, (Macroeconomics?) etc… We have fields of such high “causal density” that even the best knowledge we have (or even, that we can have) does not actually rise to the level of “actionable intelligence” of the kind that is useful to human decision makers. What we get, instead, is the unjustified upgrading of the knowledge we do have, the metrics fetish, and the overconfidence that comes from the pretense of knowledge.
When things are going well, we are lulled into the complacency of our own pretense at our mastery over reality, and we are tempted to take greater risks and “shoot for the moon”. But the pretense of knowledge is worse than the assumption of ignorance if we don’t make our systems robust to downside surprises.
In our inaccessible, non-evaluable institutions, if we admit (or even just assume) that we have (or can have) no more good knowledge or ideas (“actionable intelligence”) for how to change practices to achieve much better results with our scarce resources, then we should probably become paranoid about waste, malinvestment and misallocation, and also shift our focus from progress to security – to defending against those things that could undermine our current level of success, or collapse the whole system.
If we are to build meta-non-evaluable institutions, then that’s probably what they should do – act as a kind of rationally-paranoid “national security veto” over incautious policy. Sometimes, it’s good to force the elite leaders of society’s institutions to behave as if the fields they lead are not, in fact, expert and invulnerable, but actually ignorant and fragile. The Dinosaurs were expert and invulnerable in their world – but in life, everything is ignorant and fragile when things change. And things always change. They taught me that in the Army.
I’m glad you liked it. I would add that the reason the U.S. Army works well even as a government institution is because of the same incentives that make the free-market economy work well (in those situations when it works well, anyway). Here are those conditions:
1. Frequent Reality Testing: Many government agencies and state-aligned non-evaluable institutions live, for better or worse, in an environment that is largely immune or buffered from suffering the slings and arrows of being held accountable for success or failure in the real world. When things go wrong, it’s never anyone’s fault, and no one feels any pain. This inevitably causes their activity and “praxis” to drift from producing the positive results they promise. Worse, it bends that praxis to the service of the desires of the bureaucracy itself.
In the market, there is no substitute for sales, revenues, and cash-flow (the “reality test”) – no matter how complex your industry. Profit or die. In the non-evaluable public sector, often, there is no reality-test even possible. You often just have to take their word for it. Does State do a good job with diplomacy? Who can tell?! In the Army, well, war is real and people die; that keeps us on our toes and ensures that doctrine corresponds to reality, not the fantasies or whims of idiosyncratic leaders. Notice: even when Social Scientists try to develop new evaluation metrics, the public sector resists their implementation with every fiber of its being.
The frequent deployment of U.S. forces in the last century has been both a curse and a blessing. Only an Army that is regularly “exercised” and even sometimes fails, can avoid the resting-on-its-laurels syndrome of stagnation and sclerosis. Actually, occasional failure is, tragically, just indispensable – it’s the only thing painful enough to force people to admit they were wrong and to learn and change. If you don’t keep learning new tricks, you’ll become an old dog.
2. Competition: The most insulated non-evaluable institutions are those that operate under market-dominance or quasi-monopoly / cartel / oligopoly / state-sponsorship conditions. That sad story’s been told many times. The Army seems to be the ultimate natural monopoly, but of course it’s not at all. We have this wonderful inspiration called The Enemy, who is trying his best to kill and defeat us. Now that’s competition! Not to mention The Marines (and our SF have the Navy SEALs), who have a near-identical role but are allowed (and large enough) to develop a different set of tactics, organization, and equipment. Not just allowed – the culture makes these groups desperate to distinguish themselves and one-up each other. Notice: the public sector lobbies hard to ensure it maintains its tight grip on monopoly should a proposal for privatization or competitive-sourcing ever arise.
3. Innovation Incentive: The competition with the enemy means that each side is constantly trying to develop that “edge”, which means experimenting with and fielding new equipment and tactics. The arms-race. We have the Armored Personnel Carrier, they build the IED, we build the IED-jammer, etc. Asymmetry in opponents (a conventional Army vs. insurgents, say) actually makes this work better. With symmetric opponents who are trying to accomplish the same thing, you get a lot of outright copying or follow-the-leader pricing, which encourages the establishment of a passive de-facto cartel system. It would do the insurgent no good at all to have a tank. It would do the Army no good at all to start laying ambush explosives in civilian areas.
4. The Amenability to Assessment: The nature of (tactical) military activity is that it lends itself to knowledge-acquisition. It is usually clear quickly what works and what doesn’t, what one’s strengths and weaknesses are, what has become obsolete, where one needs to go back to the drawing-board, etc… The close and timely relationship between cause-and-effect makes evaluation and accountability possible. If leaders can’t distinguish between what made a victory go well, and what caused a defeat, then they can’t know how to pick courses of action, or how to train their men. Fortunately, we can know.
Here’s an example: let’s say you are leading a Battalion in the Kunar valley in Afghanistan, and you have to decide how many posts you can man with your limited numbers. If you choose too few posts, you won’t have enough coverage to keep the Taliban at bay. If you choose too many, each post will be minimally manned, and any major attack on a small post will require a call for rapid reinforcement. If you’ve been told that you can always have Close Air Support available as part of that reinforcement at any location in the valley within 20 minutes, then you’ll be willing to take more risks. When it turns out that the CAS took 45 minutes to show up in the last attack, *even if your men won that battle handily*, the nature of the game has allowed you to make corrections.
The world of combat is such that knowledge of its nature is accessible to investigation, and one can even adapt based on victories. It’s very complex, but it’s not so inherently complicated that no one can understand it. So on the one hand, it’s something you really can learn from, and on the other hand, commanders are encouraged to presume their own ignorance, which allows them to manage a good fluid equilibrium between initiative and security, and between risk-taking and force-protection. Compare this to the work of the “Ratings Agencies” and the Macroeconomics researchers at the Fed. Even the best experts could not, and still cannot, reliably distinguish what is safe from what is risky, or what is solvent from what is bankrupt. And what have we really learned now that we know that? What have we really changed? Not much. Maybe we can’t. Maybe trusting experts is a mistake in fields where there cannot really be much true expertise.
5. No Bailouts: There’s no moral hazard because there’s no “Too Big To Fail”, because there’s no bigger and stronger ally to bail out the U.S. Army. The government cannot paper over a defeat-in-progress by throwing huge amounts of liquidity and guarantees at the problem. The whole TBTF fiasco is also a sad story which is still being told. Exactly this kind of moral hazard, by the way, is very much in evidence in the militaries of the rest of the non-Anglosphere developed world – countries that have, through their perception that they can free-ride under the US’s umbrella, chosen butter over guns. Our begrudged role as default global “Ordnungsmacht”, however, may be coming to an end within our lifetimes. Our declining percentage of total global GDP and population cannot keep up with the magnitude of the task forever, but in the meantime, that lack of bailouts serves its behavior-guiding function well.
In other words – The Army is one of the few remaining USGOV institutions that, by its very nature (and depending on the mission and the resources the civilian politicians give it), works very well for all the same reasons that good companies work well. And bad companies fail for the same reason that non-evaluable public institutions fail. The same thing holds true, more or less, in a lot of other countries.
One of the reasons you see so many military dictatorships like the one that is now still in charge of Egypt is that it’s never healthy to have the only truly functional institution in your society also be the one that controls all the men with guns (and, in Egypt’s case, also controls most of the economy). They’ll turn their competency in fighting into a competency at keeping an iron grip on power – better to keep them focused on defense. The even better bet is to have a lot of small, mutually-competitive, independent institutions which are also healthy and evolving because of the painful but positive pressures I’ve mentioned above.
Our non-evaluable state-aligned sectors are very, *VERY* bad at accepting this state of affairs because it undermines their ability to exploit their inaccessibility and thus live securely in their rent-collecting manner. They will fight any reforms that press in this direction as hard as they can, because that is human nature. But we simply can’t afford to keep writing endless blind-faith blank checks to institutions we don’t know how to evaluate and improve – where we don’t have “actionable intelligence”. The only thing that works is rivalry.
I really like this essay, but based on a long view of history, I don’t think the US Army does work. And I think it fails for exactly the reasons given here.
The US Army used to work. Back when it worked, there was a ruthless level of shotgun-approach — Lincoln just kept appointing generals, and whenever one got stuck, he’d replace him! George Marshall did exactly the same thing. They actually *dissolved the army* between wars, and when they reformed it for each war, they just tried new people and new strategies. Either they worked or they didn’t and they moved on if they didn’t.
Since the end of WWII, we’ve had a career military, and that has prevented this from happening. The careerist officers are more interested in protecting their own reputations than in, well, winning, and they aren’t rotated out with the ruthlessness with which they were under Marshall or Lincoln.
And, bluntly, the US Army basically does get bailouts. Why? Because losing one of these foreign wars doesn’t prevent the Army from continuing to do the same stupid shit which caused it to lose the war.
It’s different when the war is on your home turf. Then, losing *matters*.
Add indigenous policy to that list.
As someone who spent some years in this profession, I endorse this whole-heartedly. I just wish someone had given me this when I started out.
And yes, it applies across lots of areas – anywhere that active, intelligent people are working against one another.
“I would add that the reason the U.S. Army works well even as a government institution is because of the same incentives that make the free-market economy work well”
I imagine another reason you seem to have missed is that it gets mind-boggling amounts of funding. Of course, it’s always possible to waste mind-boggling amounts, but I would think the standard of success should be exceptionally high if you do get such amounts.
It’s worth noting that the Soviets once had a pretty decent army too (enough to scare the West, anyway), and that’s because they spent mind-boggling amounts on it (enough to go broke, some would argue). Now that they don’t spend so much, their army isn’t nearly as good, even though the incentives to have a good army haven’t changed too much (or at least they have other people to fight with, e.g., Chechnya). The opposite example comes from China. Their army wasn’t especially good until fairly recently, but now that they’ve started spending lots on it, it’s suddenly getting better at everything. Again, this isn’t because incentives have changed; it’s just that the amount of money they spend on it has increased.
The terminology is very different, but fundamentally a large part of the discipline of Knowledge Management is about grappling with this space.
The ideas of complexity, resilience, distributed and disintermediated cognition — it’s all very exciting and challenging but still very much a young discipline.
I’ve also thought that indigenous policy would be a prime candidate for using this approach — top-down mandates and strategies will never work.
Thanks for reposting Indy’s comment, it’s very thought-provoking, as is the follow-up. In the absence of regular and non-ambiguous feedback it’s very easy to end up in what I call “the smugosphere”. I agree very much with Indy on this. What I found interesting was recently reading – in Malcolm Gladwell’s “Blink” – about how, when the US military was running one of its (desktop) war games, the “insurgents” kept winning, until they were forbidden from doing the sorts of things that, um, insurgents do. So, sometimes for the military there is feedback (though of course, Westmoreland was pretty sure he’d “win” in Vietnam with just another 500,000 troops), sometimes there’s not.
Interesting, but ultimately unconvincing when transported outside the area of the army. Where the rabbit goes into the hat is the presumption that a magic crystal ball can tell you how to do the ‘safe thing’. What is ‘safe’ health policy, management, or even army tactics? Without the leap of faith that some mythical stable status quo can knowingly be maintained and is less dangerous than any of the unknown alternatives, the argument reduces to saying more information would be nice in those situations where information is important.
In the absence of having a crystal ball for knowing what to do in any direction, I don’t see much alternative to muddling on. Also, I don’t believe for a second that the army is now a haven for a fairly philosophical stance on truth and action. I have the strong feeling I am being fed a make-believe story about how good the army now is.
Paul,
IMHO your comment is built around a non-sequitur. Either the principles which make sense in the army make sense outside the army where similar circumstances exist (which are radical uncertainty and an environment of threat) or they don’t.
Can you explain why such decision making protocols make sense in the army but not elsewhere? Note: when I read the comment I didn’t see any appeals to rabbits in hats or anything analogous. I saw a deliberate attempt to deal with the psychology of decision making as possibly the most important aspect of decision making.
Nick,
the offending bits of text are these ones:
and then there is “robust to downside surprises” and “make your systems less brittle and more robust”
these bits of text for me are a rabbit-in-the-hat. Where did the knowledge come from that we somehow know how to make systems robust, unbrittle, prepared for ‘unknown surprises’? When I think of almost any application, if we had to know how to prepare for unknown surprises, we’d pretty much have to know everything. It basically smacks of verbal trickery to me (I blindside you by putting up a good argument for why X is not really possible, then claim that we hence should do Y without ever making a coherent argument for Y).
Apart from these displays of blind-faith conservatism, it’s a fine piece.
Fair enough – I read the ‘offending’ passage as psychological advice (though I concede it’s not expressed that way). It occurred to me as I mulled this over today that the comment that heads the post is really about the psychology of decision making. Great decision makers usually have a concept of their own psychological vulnerabilities. This is true of quite a few famous investors I can think of – Graham, Buffett and Soros. They think about their own psychological foibles and try to counteract them. I’d like to see this applied to the decisions that economic policy makers make. Right now, I’d like to see more thinking of the kind that Indy proposes in the ‘offending’ passage around the way Australia navigates its current terms of trade bonanza. The Treasury may be right, but every terms of trade boom so far has led to a terms of trade crash. Perhaps this one won’t for a couple of decades, but I’d like to see more thinking done about ‘shoring up vulnerabilities’ if we’re wrong.
But you don’t get taught any of that kind of stuff (about being alert to and trying to cover for your own psychological weaknesses) in economics.
Paul, I can understand your misgivings, but I read the piece thinking of climate change, and I think that prism clarifies this aspect a little.
We can make all sorts of predictions about what the effects of climate change might be. These are probably useful, in ensemble, to get some broad-scale gauge on how much of a threat it is (akin to “gaming” in a military context, I guess). But the individual pieces of information (eg whether rainfall is going to go up or down in your particular region) are almost certainly junk information.
Acknowledging the junk status of these specific, individual predictions makes a huge difference to your response. Acting on the individual predictions means, for example, upgrading stormwater infrastructure ahead of time to prevent disaster. And probably getting it wrong, either by spending money you didn’t need to (over-investment), or not doing enough of an upgrade (under-investment). Assuming that the specific information is essentially junk and unknowable means diverting that effort to (a) pulling back on the forces driving this uncertainty (emissions mitigation) and (b) making the system less brittle – eg investing in technologies (and equipment?) that help us roll out new stormwater infrastructure quicker and at lower cost.
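To make that trade-off concrete, here is a small toy simulation of the two postures – sizing the upgrade to an unreliable point forecast versus paying a little up front for flexibility. All the costs, the distributions, and the very assumption that flexibility is cheap are invented purely to illustrate the logic, not claims about real stormwater economics:

```python
import random

# Toy comparison of the two postures sketched above, with every number
# invented purely to show the shape of the argument. "Point forecast" sizes
# the stormwater upgrade to a prediction whose error swamps the signal;
# "robust" spends a little now on flexibility and expands once reality is known.

def cost_acting_on_forecast(true_change, forecast):
    build = abs(forecast) * 10                       # capital sized to the forecast
    shortfall = max(true_change - forecast, 0) * 40  # damage from under-building
    return build + shortfall

def cost_assuming_ignorance(true_change):
    option = 5                                       # pay up front for a modular, expandable design
    expand = max(true_change, 0) * 12                # expand later at a known marginal cost
    return option + expand

trials = 100_000
total_forecast = total_robust = 0.0
for _ in range(trials):
    true_change = random.gauss(1.0, 1.5)             # wide uncertainty about the real change in peak rainfall
    forecast = true_change + random.gauss(0.0, 2.0)  # "junk" regional prediction: noise dominates
    total_forecast += cost_acting_on_forecast(true_change, forecast)
    total_robust += cost_assuming_ignorance(true_change)

print("mean cost, acting on the point forecast:", round(total_forecast / trials, 2))
print("mean cost, assuming ignorance and buying robustness:", round(total_robust / trials, 2))
```

By construction the robust posture is insensitive to forecast error; whether it actually comes out cheaper depends entirely on the assumed costs, which is precisely the kind of thing we can evaluate, even when the regional forecasts themselves are junk.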