Today I’m talking about path dependence that leaves us with second-rate technology.
The hypothesis is very simple, but very interesting. A society has a problem, and a number of technologies emerge as possible solutions. One of these technologies makes a little more progress than the others – perhaps because it makes the first step a little easier, or through pure randomness – but this progress means that it becomes the focus of attention. Funding, the efforts of innovators and entrepreneurs, and other resources start flowing towards it because it looks like the best bet. This leads to even more progress, which attracts even more resources. The competitors are neglected and forgotten.
But unbeknownst to all, one of the neglected technologies had a brighter future, or another has come along too late. If only it had enjoyed a little early success, it would have been developed into a much better technology than its more favoured rival. By the time this is realised, it’s far too late to swap over because we’re already locked into the other. The individual incentives to change are far weaker than the collective benefit would be. Because of an effectively random event in the past, we end up in a poorer future.
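The feedback loop described above can be sketched as a toy simulation – a Pólya-urn-style model of increasing returns in the spirit of Brian Arthur’s work on lock-in. All the parameters here are illustrative assumptions, not anything from the post:

```python
import random

def simulate(steps=10_000, alpha=2.0, seed=None):
    """Each new adopter picks technology A or B with probability
    proportional to (current market share) ** alpha.  With alpha > 1
    success breeds success: early random luck snowballs until one
    technology is permanently locked in."""
    rng = random.Random(seed)
    a, b = 1, 1  # both technologies start with one adopter
    for _ in range(steps):
        share_a = a / (a + b)
        weight_a = share_a ** alpha
        weight_b = (1 - share_a) ** alpha
        if rng.random() < weight_a / (weight_a + weight_b):
            a += 1
        else:
            b += 1
    return a / (a + b)  # final market share of technology A

# Run it many times: almost every run ends locked in to a single
# technology, and which one wins is effectively random.
finals = [simulate(seed=s) for s in range(200)]
locked = sum(1 for f in finals if f < 0.05 or f > 0.95)
print(f"{locked}/200 runs ended locked in to one technology")
```

The point of the sketch is that the winner is decided by early noise, not by any intrinsic superiority – exactly the situation the hypothesis worries about.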
It’s a very interesting idea, very intuitive (path dependence in general is clearly real), and it has large ramifications. But is it useful?
The overwhelming problem is that counterfactuals are hard to find. We can’t (yet) look at alternate universes to see whether the technologies we pursued are inferior to those we didn’t. That makes it hard to confirm the hypothesis on the vast majority of candidates. This also makes it hard to avoid making similar mistakes. There may be innumerable better paths we could have taken, but without some way to recognise what they were, the idea is fairly pointless.
What do we do with the hypothesis then?
There have been efforts to identify cases based on real analysis. Unfortunately most treatments of the idea are content to stop at just two cases, both of which are far from convincing.
Dvorak. The keyboard layout we use today, QWERTY, was designed to combat a specific problem of early manual typewriters: the type bars had a tendency to jam together. Although that problem was solved and the manual typewriter is long dead, we still use the QWERTY layout – retraining typists was too hard. This is a good example of path dependence. Dvorak is another layout, and its advocates claim it is faster than QWERTY. Because of the vagaries of early typewriters, we might be stuck using a less efficient layout. Luckily Dvorak is fully developed, so it’s not hard to compare the two layouts in a controlled study – but no studies except those run by Dvorak’s owners have shown any efficiency gains worth noting. The hypothesis is not supported.
Betamax. Betamax and VHS were early rivals to become the standard for consumer videotape. VHS won because of a larger library of content and longer play times, but lore has it that Betamax had better picture quality. This seems to be a confusion between the consumer Betamax and the professional Betacam. The latter had clear advantages and did indeed become the industry standard. But even if we accept every claim in favour of Betamax, the ramification is “in the last two decades of the 20th century, consumers had slightly lower picture quality for home movies than they would have otherwise”. This is not very shocking.
So I went looking. Here are some others that are suggested, either explicitly in favour of the hypothesis or implicitly.
The Internal Combustion Engine. Electric cars were not a child of the oil shocks; they go right back to the beginning of the automobile. Internal combustion engines became dominant because they had greater range and were cheaper to manufacture and run. When the oil shocks negated the running-cost advantage, the idea arose that we were stuck with a suboptimal technology. But for a few early successes by petrol cars in the early 20th century, we could be driving cheap, clean and efficient electric cars [fn1]!
There are a number of problems with this idea. Electric cars, not petrol cars, were the ones with the early success. The limits on electric cars were mainly the capacity of the battery, the cost of the battery, and the time the battery took to recharge. These problems remain, despite the fact that battery technology continued to be used and developed elsewhere. We don’t even need to mention the lack of charging stations. Petrol cars may well be suboptimal, but that is because of the failure to price resources and pollution correctly, not path dependence.
Hanzi. In the Eastern Mediterranean, phonetic writing systems arose thousands of years ago and became the ancestors of the Greek and Roman alphabets. In China a pictographic writing system arose and became Hanzi (汉字), the Chinese characters. As Jared Diamond sees it, the early adoption of a pictographic system and unawareness of phonetic systems in the East limited literacy and, more importantly, meant the Chinese were unable to fully exploit printing after they invented it.
Yet China’s neighbours had been using Hanzi phonetically for hundreds of years before Gutenberg. They had also invented the kana syllabaries (in Japan) and hangul (in Korea) to replace it. Even in China itself, Kublai Khan had commissioned a phonetic script, based on the Tibetan script but designed for all the languages of his empire. The failure of his script and the slow adoption of hangul were path dependence, but not in the sense of the hypothesis. They were the product of special interests (officials who didn’t want to make it easier for people other than their own children to train for the bureaucratic exams) rather than the individual switching costs the hypothesis implies. Swapping systems has not proved costly elsewhere, in Korea or Vietnam. Nor has it prevented the effective dominance of pinyin, the romanised Chinese used to input text into computers. Given the quantity of electronic text, we are rapidly approaching the point where the majority of Chinese ever written will have been written (though not read) in a phonetic script – if we aren’t there already.
Electricity – I once read a discussion of the hypothesis (I can’t remember where) that gave “AC power” as a throwaway example of an inferior technology we’re stuck with. As far as I know, the reasons for the success of alternating current in the War of the Currents – its lower transmission losses compared to direct current, and the relative ease of transforming it between voltages – are as advantageous as they ever were. We could make a claim for the inferiority of the lower mains voltage that is standard in the US and Japan, but the effects seem limited to a preference for stovetop kettles over electric kettles.
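The transmission-loss point is simple arithmetic. Resistive loss in a line is I²R, and since current falls as voltage rises for the same delivered power, stepping voltage up (which transformers made easy for AC) cuts losses quadratically. The power and line-resistance figures below are illustrative round numbers, not real grid data:

```python
def line_loss_fraction(power_w, voltage_v, line_resistance_ohm):
    """Fraction of transmitted power lost to resistance.
    P_loss = I^2 * R with I = P / V, so the loss fraction
    is P * R / V^2: ten times the voltage, one hundredth the loss."""
    current = power_w / voltage_v
    return current ** 2 * line_resistance_ohm / power_w

# Illustrative: 1 MW sent over a line with 5 ohms of resistance,
# at three different transmission voltages.
for volts in (2_400, 24_000, 240_000):
    frac = line_loss_fraction(1e6, volts, 5)
    print(f"{volts:>7} V: {frac:.4%} of the power lost")
```

At the lowest voltage the line eats most of the power, which is why low-voltage DC distribution could not compete once AC made high-voltage transmission practical.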
Nuclear Power [I forgot this section the first time] – Nuclear power never delivered on its promise of electricity “too cheap to meter”. Nor was that hyperbolic PR: it was the genuine belief of anyone who had learned about mass-energy equivalence (E=mc²). It seems implausible that the promise isn’t still there, so path dependence is a natural hypothesis to look at.
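Some rough arithmetic shows why mass-energy equivalence inspired that belief. The coal energy density below is an assumed round figure, and real fission converts only a tiny fraction of the fuel’s mass, but the comparison is still startling:

```python
C = 299_792_458  # speed of light in m/s

def mass_to_joules(kg):
    """E = m * c^2: the energy equivalent of a given mass."""
    return kg * C ** 2

# Complete conversion of one gram of mass, compared with burning coal
# (~24 MJ/kg is a rough, commonly quoted energy density for coal).
joules = mass_to_joules(0.001)
coal_kg = joules / 24e6
print(f"1 g of mass ~ {joules:.2e} J ~ {coal_kg:,.0f} kg of coal burned")
# Fission actually releases only about 0.1% of the fuel's mass as
# energy, but even that dwarfs any chemical fuel per kilogram.
```

A gram of mass is worth thousands of tonnes of coal, so “too cheap to meter” looked like physics rather than salesmanship.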
One such claim is that uranium and plutonium were pursued because of their potential to produce weapons, to the neglect of thorium – although India’s experience with thorium hasn’t been exciting. The current thorium projects give a rare opportunity to see whether a neglected technology goes anywhere, so I’ll reserve judgement.
A similar hypothesis is that the development of light water reactors in nuclear submarines made them the obvious model for commercial reactors where they became entrenched at the expense of alternatives. Unfortunately we once again run into the problem of unobserved alternate universes. We can’t tell if the untried alternatives really were better.
The least plausible hypothesis, which is by far the most popular amongst current advocates, is that nuclear power is the victim of irrational fears – of environmentalists and of Jane Fonda. If only there were the will to pursue it! The invocation of The China Syndrome and of Three Mile Island is interesting because it highlights the fact that in 1979 – after 30 years of heavy funding and massive government support, and during an oil shock that drove up the price of rival fuels – nuclear power was nowhere near financially viable. One does not need environmentalists to explain why nuclear power thrived only where weapons were desired or corporatism was rife.
Rail gauges – The adoption of different standard gauges is a good example of an inefficient outcome due to path dependence, but it’s not due to the inferiority of one gauge over another, merely the fact that they don’t work together.
Esperanto – Unfortunately Esperanto is almost a punchline by itself, so I can’t give it a proper examination.
Maybe there are some others I can think of…
Models – John Quiggin explains his use of neoclassical economics this way:
Mainstream economics provides a set of tools (the theory of public goods, externality and market failure, taxation and income distribution) to do the analysis and a widely-understood language in which to express the results. No existing alternative body of thought in economics comes close to this.
If I understand the heterodox side of the famous Cambridge controversy correctly, the terms in which I express my case (relative rates of return to equity and bonds) are logically incoherent. But I have no idea how I would make my case if I were to use, say, the theoretical framework promoted by the late Piero Sraffa. It may be that, if the existing body of economic analysis were replaced by an entirely new theory developed on different premises, we could derive a better analysis. But I only have one life, and I’d rather devote it to promoting better policy outcomes than to relaying foundations.
In short, his incentives to swap are tiny, even if there were demonstrably a better universe out there. This fits the hypothesis well. He could have added that even if he had the body of analysis needed to produce answers, it wouldn’t get published, understood or read by policymakers unless they’d all been trained in it. General equilibrium is perhaps the best example. I don’t think I’ve met an (Australian) economist who likes it, but there’s nothing else. Even bad answers are preferred to no answers, so people are still willing to pay for it.
Maybe if the Marshalls or Paul Samuelson had written shittier books a better system would have got a head start, but we’ll never know.
I titled this section “models” because it may not be confined to economics alone.
Automobiles – Quite apart from how the vehicles are powered, maybe the whole idea of individual vehicles, driven by amateurs and designed to transport people who occupy a tiny fraction of their volume, was a wrong turn. Even if we fixed the fuel problem, we’d still have congestion, parking problems and the fact that we’re not very good at piloting high-speed metal – a fact tragically manifested in the road toll. But the decision to take up automobiles (instead of, say, mass transit or high density) produced a path dependence that extends to land laws and the shapes of entire cities, especially the ones we like least. Switching now would be very costly.
Again we can only know by looking at other universes.
The idea of path dependence is invaluable in many fields and the idea of it leading to suboptimal technology is very interesting and may well be true. But “true” doesn’t mean “useful”. On the existing evidence, it may not matter.
P.S. About three-quarters of the way through this post I came across this review article, which ended up treading very similar ground despite being about technological path dependence in a broader sense than just suboptimal outcomes. I think it leads to the same conclusions, but it also deals in far greater depth with the Dvorak and Betamax cases.
[fn1] That’s when the hypothesis wasn’t an oil company conspiracy theory.