As Philip Tetlock so powerfully showed, most expertise is worth nix if the criterion of expertise is whether you can demonstrate superior predictions about what will happen in the future. He showed that most experts can’t predict any better than tolerably informed non-experts, and some experts – particularly the hedgehogs who “know one big thing” – are substantially worse than your average Joe. By contrast, some expert ‘foxes’, who know many things, are a tad better.
And of course prognosticating about the future is a human foible – we’re always doing it whether it’s useful or not. It can also involve status displays. I recall being invited to a dinner hosted by one of Australia’s noted businessmen during the GFC, with 15 or so people around the table. No-one really wanted to discuss the nature of the events. Rather, our host told us that he’d been talking to the head of Goldman’s in the US, or some other great and powerful operator, who had offered some casual prediction. But it was painfully obvious to me – and I kept quiet about it because it would have been raining on others’ parade – that no-one really knew what would happen next, and these corporate guys were less likely to know than a canny economist. But the conversation rolled on.
I’ve always thought it strange that when journos interview experts and discuss their predictions – for instance about the dollar – they don’t also ask them for some casual variant of a confidence interval. Mightn’t the expert on such matters be expected to have an expert view on the value of his own expertise? For instance, there’s a literature on the extent to which past forecasts improve on simple rules (like predicting that tomorrow’s value will be today’s value, or that tomorrow’s value will revert to the long-run mean). If experts don’t volunteer such insights, shouldn’t they be asked for them? Oh but wait, the interviewer and the interviewee are all in it together – along with the listeners. The bullshitter, the bullshittee and the bullshitted.
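To make that concrete, here’s a minimal sketch in Python of how a forecaster might be scored against the two simple rules just mentioned. Everything in it – the series, the “expert” forecasts, the use of mean absolute error – is an invented assumption for illustration, not anyone’s actual data or method:

```python
# Toy comparison of an "expert" forecast against two naive benchmarks:
# (1) no-change: tomorrow's value equals today's value
# (2) mean reversion: tomorrow's value equals the long-run mean
# All numbers below are invented purely for illustration.

def mean_absolute_error(forecasts, actuals):
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

# Hypothetical daily exchange-rate values (the "actuals")
series = [0.75, 0.76, 0.74, 0.73, 0.75, 0.77, 0.76, 0.74]
long_run_mean = sum(series) / len(series)

# Forecasts for day t+1, made at day t
actuals = series[1:]
no_change = series[:-1]                        # predict today's value
mean_revert = [long_run_mean] * len(actuals)   # predict the long-run mean
expert = [0.76, 0.75, 0.72, 0.76, 0.78, 0.75, 0.73]  # hypothetical expert calls

for name, forecast in [("expert", expert),
                       ("no-change", no_change),
                       ("mean reversion", mean_revert)]:
    print(f"{name:>15}: MAE = {mean_absolute_error(forecast, actuals):.4f}")
```

If the expert’s error isn’t clearly below that of the naive rules, the implied confidence interval on the expertise is wide indeed – which is exactly the question the interviewer never asks.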
Anyway, it’s a nice fantasy – one which might cut swathes through the industry of being and broadcasting talking heads.
In the meantime, I came upon this nice visual illustration of the ideas here.
The diagram being appealed to is the one above.
When you have any scientific theory, it has a range of validity. Think about that phrase for a minute: range of validity. What does that mean?
Let’s start with a small idea first to illustrate this: the idea that heat rises. Sure, you put a hot thing like a burning candle in a cool room, and the flame will heat up the air around it, and the hot air will rise. Sounds simple, until you ask the question, “how will the hot air rise?”
And the answer to that one is more complex. Under some conditions (above right), the hot air will rise smoothly and the air will flow in a laminar fashion, while under other conditions (above left) the air will rise turbulently. This doesn’t mean that there’s anything wrong with your theory that hot air rises, though. It means that your theory is incomplete, as are all scientific theories.
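For what it’s worth, physicists capture the “under some conditions” question with a dimensionless number. Here’s a minimal sketch in Python, assuming textbook property values for air near room temperature and the common rule-of-thumb threshold of about 10⁹ for buoyant flow along a vertical surface – the real transition depends on geometry and disturbances, so treat the numbers as illustrative:

```python
# Rough sketch: the Rayleigh number gives a rule of thumb for whether
# buoyant air rises smoothly (laminar) or chaotically (turbulent).
# Property values are textbook figures for air near room temperature;
# the ~1e9 threshold is a common rule of thumb for vertical surfaces.

G = 9.81           # gravity, m/s^2
BETA = 1 / 300     # thermal expansion coefficient of air, 1/K
NU = 1.6e-5        # kinematic viscosity of air, m^2/s
ALPHA = 2.2e-5     # thermal diffusivity of air, m^2/s
RA_CRITICAL = 1e9  # rule-of-thumb laminar/turbulent threshold

def rayleigh(delta_t_kelvin, length_m):
    """Rayleigh number for a temperature difference over a length scale."""
    return G * BETA * delta_t_kelvin * length_m**3 / (NU * ALPHA)

for length in (0.1, 1.0):  # plume length scale in metres
    ra = rayleigh(delta_t_kelvin=50, length_m=length)
    regime = "turbulent" if ra > RA_CRITICAL else "laminar"
    print(f"L = {length} m: Ra = {ra:.2e} -> likely {regime}")
```

The same hot-air theory, over a candle-sized length scale, predicts laminar flow; over a metre it predicts turbulence – a nice small-scale example of a range of validity.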
Which is excellent as far as it goes, but having a theory is the middle of the process. In order to have a theory, first you have to have a problem, or at least be looking for one. A theory is a map for solving a problem, not for discovering whether you have one. In fact I think most prognostication is a response to the question, “Is there going to be a problem in the future?”
But “problems” are not rationally discoverable. (I think quite a few people have pointed this out; I just can’t name them at present.) However, a quote from G. F. Smith might illuminate the issue.
“A problem is an undesirable situation that is significant to and may be solvable by some agent, although probably with some difficulty. Since a problem is an “undesirable situation,” it does not exist strictly as an objective state-of-the-world, nor as a subjective state of dissatisfaction.
A problem is a relationship of disharmony between reality and one’s preferences, and being a relationship, it has no physical existence. Rather, problems are conceptual entities or constructs. The term is an abstraction from the world of observables and is applied because it serves a useful function. Essentially the term is an attention-allocation device. Marking a situation as problematic is a means of including it in one’s “stack” of concerns, placing it on an agenda for future attention and solution efforts. Thus, there is an element of arbitrariness in labelling a situation as problematic.
One can apply the concept more or less liberally, depending on whether he prefers his attention to be loosely or tightly focused. In any complex, real-world situation, there are an unlimited number of concerns which could be identified as problems, but there are none which absolutely must be so identified.”
(from G. F. Smith, “Towards a heuristic theory of problem structuring”, 1988, p. 1491)
Sniffing out an “undesirable situation” that might occur in the future is obviously a pretty good idea. Philip Tetlock’s “fox” approach would certainly equip you better to manage it than narrowing the range of anticipated problems before you even start by hampering yourself with ideologies, frames, schemata and so on. However, that’s pretty hard to do, since you can’t think without them.
Hubert Dreyfus has written some interesting stuff on “expertise”, and one of the things he points out is that an expert has assimilated so much information that they can solve problems really rapidly, but they can’t tell you how they did it. (It has become a bunch of heuristics.) This is where it’s possible to use some really peculiar sensemaking devices. I can’t remember whether it’s Tetlock or his populariser, Dan Gardner, who says that if you have trouble making a decision, you can flip a coin and, before it lands, decide which side you would like it to come down on – and you’ll probably be right. Examining chicken entrails and shaking bones work in more or less the same way, by focussing your attention on what you already know and how it matches events as you see them, allowing “background processing” to do its work.
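For the curious, here’s a toy version of the coin-flip trick in Python – the names and prompts are mine, and the point of course isn’t the code but noticing your own hope while the coin is “in the air”:

```python
import random

# Toy sketch of the coin-flip decision trick: the coin's answer is
# irrelevant; what matters is which outcome you catch yourself hoping
# for before it lands.
def coin_flip_decision(option_a, option_b):
    result = random.choice([option_a, option_b])           # the coin is tossed
    hoped_for = input(f"Before it lands: which do you hope for, "
                      f"{option_a!r} or {option_b!r}? ")   # your gut speaks
    print(f"The coin says {result!r}, but your real answer is {hoped_for!r}.")
    return hoped_for

# Example: coin_flip_decision("take the job", "stay put")
```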
Oh, and BTW, some of you who have stuck with this long comment till now might be wondering: if a theory is the middle and a problem is the beginning, what’s the end? Deciding how to apply what you have found out – also a non-rational and often highly political process.
“In order to have a theory, first you have to have a problem or at least be looking for one.” … “But ‘problems’ are not rationally discoverable.”
No you don’t, and yes they are. If you develop a theory well, then it directs your course of investigation, and this will help you identify things that are important but were not necessarily a priori obvious (which may or may not be problems). I know many people like to find a problem and then develop a theory to try and explain it (sometimes going back and forth between data and theory), but there’s no a priori reason to – this is why there is a proliferation of one-theory-one-problem type work in many areas. This is bad science. I’m sure Australia’s biggest fan of Popper who also blogs (Rafe) can tell you all about that.
If you don’t believe this, then just look at 20th century physics. Bose and Einstein, for example, didn’t go out of their way to find superfluids. They didn’t say: “I’d like to know how to create a fluid with all these weird properties – how can this be done?” They just had a theory, kept on developing it (presumably in a fairly methodical and rational way), and then found that it predicted that fluids should have all sorts of weird behaviors when close to absolute zero. Only decades later was that empirically confirmed.
Agreed Conrad.
And that suggests that their problem was nothing to do with the empirical world. Their problem may well have been something like: “Oh look, here are two mathematical constructs which look like they ought to be congruent but aren’t.” Now, in my non-mathematical way, I’m guessing there must be heaps of these, so how did they choose which ones to pursue? They may have had a hunch, based on previous experience, that one path would likely lead to more fruitful outcomes than another. But that would be because they were experts, and therefore able to make better heuristic guesses about the fruitfulness of that kind of problem than the rest of us.
For reference, a later post.