Ideas that might not matter: Inefficient technological path dependence

Part one of an intermittent series on interesting ideas that might not be useful.

Today I’m talking about path dependence that leaves us with second-rate technology.

The hypothesis is very simple, but very interesting. A society has a problem, and a number of technologies emerge as possible solutions. One of these technologies makes a little more progress than the others – perhaps because it makes the first step a little easier, or through pure randomness – but this progress means that it becomes the focus of attention. Funding, the efforts of innovators and entrepreneurs, and other resources start flowing towards it because it looks like the best bet. This leads to even more progress, which attracts even more resources. The competitors are neglected and forgotten.

But unbeknown to all, one of the neglected technologies had a brighter future, or another has come along too late. If only it had enjoyed a little early success, it would have been developed into a much better technology than its more favoured rival. By the time this is realised, it’s far too late to swap over, because we’re already locked into the other. The individual incentives to change are far weaker than the collective benefit would be. Because of an effectively random event in the past, we end up in a poorer future.
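
This mechanism is the ‘increasing returns’ adoption story associated with Brian Arthur, and it is simple enough to simulate. Here is a minimal sketch (my own toy illustration, not anyone’s calibrated model): two identical technologies, where each new adopter picks one with probability proportional to its installed base, so early luck compounds.

```python
import random

def adoption_race(steps=10_000, seed=None):
    """Polya-urn-style toy model of increasing returns.
    Each new adopter picks a technology with probability proportional
    to its current installed base, so early luck compounds."""
    rng = random.Random(seed)
    base = [1, 1]                      # both technologies start with one adopter
    for _ in range(steps):
        p_a = base[0] / sum(base)      # current market share of technology A
        base[0 if rng.random() < p_a else 1] += 1
    return base[0] / sum(base)         # final share of A

# Identical technologies, ten reruns: wildly different final shares,
# driven purely by early randomness.
print([f"{adoption_race(seed=s):.2f}" for s in range(10)])
```

Rerun the race and you get a different ‘winner’ each time, which is the hypothesis in miniature: the final market share tells you nothing about underlying merit.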

A very interesting idea, very intuitive (path dependence in general is clearly true) and with large ramifications. But is it useful?

The overwhelming problem is that counterfactuals are hard to find. We can’t (yet) look at alternate universes to see whether the technologies we pursued are inferior to those we didn’t. That makes it hard to confirm the hypothesis for the vast majority of candidates. It also makes it hard to avoid making similar mistakes. There may be innumerable better paths we could have taken, but without some way to recognise what they were, the idea is fairly pointless.

What do we do with the hypothesis then?

There have been efforts to identify cases based on real analysis. Unfortunately most treatments of the idea are content to stop at just two examples, both of which are far from convincing.

Dvorak. The keyboard layout we use today, QWERTY, was designed to combat a specific problem of early manual typewriters: the typebars had a tendency to jam together. The problem was solved long ago and the manual typewriter is long dead, yet we still use QWERTY, because retraining typists was too hard. This is a good example of path dependence. Dvorak is a rival layout that claims to be faster than QWERTY. Because of the vagaries of early typewriters, we might be stuck using a less efficient layout. Luckily Dvorak is fully developed, so it’s not hard to compare the two layouts in a controlled study, but no studies except those run by Dvorak’s promoters have shown any efficiency gains worth noting. The hypothesis is not supported.

Betamax. Betamax and VHS were early rivals to become the standard for consumer videotape. VHS succeeded because of a larger library of content and longer play times, but lore has it that Betamax had better picture quality. This seems to be a confusion between the consumer Betamax and the professional Betacam. The latter had clear advantages and did indeed become the industry standard. But even if we accept every claim in favour of Betamax, the ramification is “in the last two decades of the 20th century, consumers had slightly lower picture quality for home movies than they would have otherwise”. This is not very shocking.

So I went looking. Here are some other cases that have been suggested, either explicitly in favour of the hypothesis or implicitly.

The Internal Combustion Engine. Electric cars were not a child of the oil shocks; they go right back to the beginning of automobiles. Internal combustion engines became dominant because they had greater range and were cheaper to manufacture and run. When the oil shocks negated the running-cost advantage, the idea arose that we were stuck with a suboptimal technology. But for a few early successes by petrol cars in the early 20th century, we could be driving cheap, clean and efficient electric cars! [1]

There are a number of problems with this idea. It was electric cars, not petrol cars, that had the early success. The limits on electric cars related mainly to the capacity of the battery, the cost of the battery and the time the battery took to recharge. These problems remain, despite the fact that battery technology continued to be used and developed elsewhere. We don’t even need to consider the lack of charging stations. Petrol cars may well be suboptimal, but that is because of the failure to price resources and pollution correctly, not because of path dependence.

Hanzi. In the Eastern Mediterranean, phonetic writing systems arose thousands of years ago and became the ancestors of the Greek and Roman alphabets. In China a pictographic writing system arose and became Hanzi (汉字), the Chinese characters. As Jared Diamond sees it, the early adoption of a pictographic system and unawareness of phonetic systems in the East limited literacy and, more importantly, meant the Chinese were unable to fully exploit printing after they invented it.

Yet China’s neighbours had been using Hanzi phonetically for hundreds of years before Gutenberg. They had also invented the kana (in Japan) and hangul (in Korea) to replace it. Even in China itself, Kublai Khan had commissioned a phonetic script (the ’Phags-pa script, adapted from Tibetan) designed for all the languages of his empire. The failure of his script and the slow adoption of hangul were path dependence, but not in the sense of the hypothesis. They were products of special interests (officials who didn’t want to make it easier for people other than their own children to train for the bureaucratic exams) rather than of the individual switching costs the hypothesis implies. Swapping systems has not otherwise proved costly in Korea or Vietnam. Nor has it prevented the effective dominance of pinyin, the romanised Chinese used to input text into computers. Given the quantity of electronic text, we are rapidly approaching the point where the majority of Chinese ever written will have been written (although not read) in a phonetic script – if we aren’t there already.

Electricity – I read – I can’t remember where – a discussion of the hypothesis that gave “AC power” as a throwaway example of an inferior technology we’re stuck with. As far as I know, the reasons for the success of alternating current in the Current Wars – its lower transmission losses compared to direct current and the relative ease of voltage transformation – are as advantageous as they ever were. We could make a claim for the inferiority of the lower voltage that is standard in the US and Japan, but the effects seem limited to a preference for stovetop kettles over electric kettles.
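
For what it’s worth, the transmission-loss point is just Ohm’s law: for a fixed power delivered, line loss is I²R and I = P/V, so stepping the voltage up cuts resistive loss with the square of the voltage – and AC transformers made that stepping trivial. A back-of-envelope sketch, with made-up line figures:

```python
# Resistive transmission loss vs. voltage (illustrative figures only).
P = 1.0e6   # power delivered: 1 MW
R = 0.5     # line resistance: 0.5 ohm (assumed)

for V in (2_400, 33_000, 330_000):   # transmission voltage in volts
    I = P / V                        # current required at this voltage
    loss = I**2 * R                  # power lost heating the line
    print(f"{V:>7} V: {loss/P:8.4%} of delivered power lost")
```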

Nuclear Power [2] – Nuclear power never delivered on its promise of electricity “too cheap to meter”. That wasn’t hyperbolic PR either, but the genuine belief of anyone who had learned about mass-energy equivalence (E = mc²). It seems implausible that the promise isn’t still there, so path dependence is a natural hypothesis to examine.
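
The back-of-envelope behind that belief is easy to reproduce (rough numbers; the ~0.1% figure for fission’s mass-to-energy conversion is my approximation):

```python
c = 3.0e8                  # speed of light, m/s
E_full = 1.0 * c**2        # energy in 1 kg of mass fully converted: 9e16 J
kwh = E_full / 3.6e6       # 1 kWh = 3.6e6 J

print(f"Full conversion of 1 kg: {kwh:.1e} kWh")   # ~2.5e10 kWh
# Fission converts only ~0.1% of the fuel's mass to energy, but that
# still leaves ~25 million kWh per kg, hence 'too cheap to meter'.
print(f"At ~0.1% conversion: {kwh * 1e-3:.1e} kWh")
```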

One such claim is that uranium and plutonium were pursued because of their potential to produce weapons, to the neglect of thorium – although India’s experience with thorium hasn’t been exciting. The current thorium projects give a rare opportunity to see whether the neglected technology goes anywhere, so I’ll reserve judgement.

A similar hypothesis is that the development of light water reactors in nuclear submarines made them the obvious model for commercial reactors, where they became entrenched at the expense of alternatives. Unfortunately we once again run into the problem of unobserved alternate universes: we can’t tell if the untried alternatives really were better.

The least plausible hypothesis, which is by far the most popular amongst current advocates, is that nuclear power is the victim of irrational fears, of environmentalists and of Jane Fonda. If only there were the will to pursue it! The invocations of The China Syndrome and of Three Mile Island are interesting because they highlight the fact that in 1979 – after 30 years of heavy funding and massive support from governments, and during an oil shock that shot up the price of rival fuels – nuclear power was nowhere near financially viable. One does not need environmentalists to explain why nuclear power managed to thrive only where weapons were desired or where corporatism was rife.

Rail gauges – The adoption of different standard gauges is a good example of an inefficient outcome due to path dependence, but it’s not due to the inferiority of one gauge over another, merely to the fact that they don’t work together.

Esperanto – Unfortunately Esperanto is almost a punchline by itself, so I can’t give it a proper examination.

Maybe there are some others I can think of…

Models – John Quiggin explains his use of neoclassical economics this way.

Mainstream economics provides a set of tools (the theory of public goods, externality and market failure, taxation and income distribution) to do the analysis and a widely-understood language in which to express the results. No existing alternative body of thought in economics comes close to this.

If I understand the heterodox side of the famous Cambridge controversy correctly, the terms in which I express my case (relative rates of return to equity and bonds) are logically incoherent. But I have no idea how I would make my case if I were to use, say, the theoretical framework promoted by the late Piero Sraffa. It may be that, if the existing body of economic analysis were replaced by an entirely new theory developed on different premises, we could derive a better analysis. But I only have one life, and I’d rather devote it to promoting better policy outcomes than to relaying foundations.

In short, his incentives to swap are tiny, even if there were demonstrably a better universe out there. This fits the hypothesis well. He could have added that even if he had the alternative body of analysis to produce answers, they wouldn’t get published, understood or read by policymakers unless the policymakers had all been trained in it. General equilibrium is perhaps the best example. I don’t think I’ve met an (Australian) economist who likes it, but there’s nothing else. Even bad answers are preferred to no answers, so people are still willing to pay for it.

Maybe if the Marshalls or Paul Samuelson had written shittier books, a better system would have got a head start, but we’ll never know.

I titled this section “models” because the phenomenon may not be confined to economics alone.

Automobiles – Distinct from how the vehicles are powered, maybe the whole idea of individual vehicles, designed to transport people who are tiny fractions of their own volume and driven by amateurs, was a wrong turn. Even if we fixed the fuel problem, we’d still have congestion, parking problems and the fact that we’re not very good at piloting high-speed metal – a fact tragically manifested in the road toll. But the decision to take up automobiles (instead of, say, mass transit or high density) resulted in a path dependence that extends to land laws and the shapes of entire cities, especially the ones we like least. Switching now would be very costly.

Again we can only know by looking at other universes.

The idea of path dependence is invaluable in many fields and the idea of it leading to suboptimal technology is very interesting and may well be true. But “true” doesn’t mean “useful”. On the existing evidence, it may not matter.

P.S. About 3/4 of the way through this post I came across this review article, which ended up treading very similar ground despite being about technological path dependence in a broader sense than just suboptimal outcomes. I think it leads to the same conclusions, but it also deals in far greater depth with the Dvorak and Betamax cases.

[1] That’s when the hypothesis wasn’t an oil company conspiracy.

[2] I forgot this section the first time.

About Richard Tsukamasa Green

Richard Tsukamasa Green is an economist. Public employment means he can't post on policy much anymore. Also found at @RHTGreen on twitter.

32 Comments

alex
11 years ago

With the ICE, what about ‘lock-in’ in terms of carbon… or more generally ‘carbon lock-in’?

All of the infrastructure around ICE cars – including roads, suburb/housing design, re-fuelling stations, plus government programs – has made a transition to a low-carbon alternative more difficult than if they were merely competing now, when we have ‘full information’ about the actual ‘cost’ of using ICE.

John Quiggin
11 years ago

The most convincing case, I think, was the long dominance of MS-DOS and then Windows in the desktop computer market.

MS-DOS systems massively outsold Macs before MS introduced Windows 95, the first usable graphical interface for machines based on the IBM PC design. But as soon as Win-95 became available, MS-DOS disappeared.

That seems like pretty clear evidence that MS-DOS was in fact inferior, but was kept alive by lock-in. And, although the difference is much less, Windows has remained inferior to Mac OS, and is only now being displaced.

crocodile
11 years ago

All commercial effort should now be directed to the invention of the crystal ball. This will ensure that we will never be inflicted with this type of problem again.

John Quiggin
11 years ago

The heat has gone out of the OS wars, in part because the differences between Mac OS X and Windows 7 are now so minor, and even more because of the dominance of mobile computing (which has been great for Apple but a mixed blessing at best for Mac OS). That in turn reflects another qualification on the lock-in argument. In the end, a way will probably be found to modify the locked-in inferior choice so that it captures the most important benefits of the locked-out alternative.

meika
11 years ago

Looks like evolution and ecology to me. The successful mutations condition their environment to suit themselves; other mutations (not always later ones) might be bodily more efficient but cannot extend their phenotype into the interactions around them. Evolutionary success here is defined by how the environment is affected. Lock-in would be just how it looks from the inside.

The recumbent bicycle is another: the design is 100 years old, and it was banned from competition in France in the 1930s (by fiat I guess; this is why history written by looking at monarchs is interesting: the arbitrary and idiosyncratic decisions dominate the masses, think Catherine the Great of Russia).

The banning meant the upright bicycle got the kudos, and so all the industry attention and effort. The recumbent is faster because there is a lot less wind resistance. I own one, and as I ride into a headwind to and from work most days (tailwinds are incredibly rare), the recumbent is an ideal choice for commuting. Its disadvantages are that wind resistance is not a huge problem when going uphill, so then it’s slower, and you can’t monkey around on them like you can on an upright, though you can find young people monkeying around on them on youtube.

Also the better bike design is still out there:
http://www.newscientist.com/article/mg21028141.700-bike-to-the-drawing-board.html

(Some old mountain bike competition riders with neck problems ride recumbents because they are easier on the neck. And you see a lot more of what’s going on. Try walking around with your neck crimped the way it is when riding an upright, and see how long you last.)

meika
11 years ago

I’m atop 26″ or 600mm wheels and I am lower, but not that low (admittedly the human-powered vehicle land speed records are all held by recumbents in which one basically lies between the wheels, but I sit waaaay above them). Recumbents atop 700mm wheel frames are now available too.

I think part of the fear factor is that they look unusual. This is another part of the lock-in: success depends on how a mutation/design affects the environment around it, not just the technical efficiency of the form in itself. You’re scared? That’s part of the lock-in success of the upright.

wilful
11 years ago

I’m claiming no expertise, but I understand that gas-powered fridges are just as good as, if not better than, electric ones, but that due to successful marketing in the 1920s they didn’t gain critical mass?

Tel
11 years ago

The normal description of this problem (outside economics) is that you have local minima within the space you are trying to optimize. The well known answer is to search in a broader pattern (i.e. try more different things in more different places). To some extent, this is one of the few sensible arguments against globalization, because the more global our economy becomes, the narrower the search space becomes. It’s not a huge danger, because in the techo sphere, someone always does it different, just because.
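
A toy version of this point (with an illustrative bumpy objective of my own choosing): greedy hill-climbing from a single start sticks at the nearest local peak, while restarting from many scattered points – the ‘broader search’ – finds a better one.

```python
import math, random

def f(x):
    # A bumpy toy objective with several local maxima (my invention).
    return math.sin(x) + 0.1 * x

def hill_climb(x, step=0.01, iters=5000):
    """Greedy local search: only ever accepts uphill moves."""
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) > f(x):
                x = cand
    return x

random.seed(0)
one_start = hill_climb(2.0)                            # stuck on the nearest peak
starts = [random.uniform(0, 10) for _ in range(20)]    # the 'broader search'
best = max((hill_climb(s) for s in starts), key=f)
print(f"one start:   x={one_start:.2f}, f={f(one_start):.3f}")
print(f"20 restarts: x={best:.2f}, f={f(best):.3f}")
```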

As far as I know the reasons for the success of Alternating Current in the Current Wars – its lower transmission losses compared to Direct Current and the relative ease of transformation – are as advantageous as they ever were.

Not really. The basic AC 50Hz transformer has the advantages that it is low-tech, robust, requires little maintenance and is a well-established design. It has the disadvantages that it is physically large, heavy, requires a lot of copper, requires disgusting transformer oil, and in a modern context is expensive to build and transport.

If you remember maybe 10 years ago, the typical plug-pack that went into a wall socket was also big and heavy, but the modern equivalent is lightweight and small, and usually more efficient. That’s because silicon switching has taken over and is cheaper, smaller, more efficient. Same could apply to the heavy street transformers, but we aren’t that far down the track just yet — the rise of silicon power electronics has been a relatively recent thing (and no doubt copper prices will ensure we keep going in the same direction).

I think there’s a good case for 50 VDC circuits in the average home (i.e. short distance) for a number of reasons:

* Once upon a time we used a small number of high power appliances, but now we use a large number of low power appliances.

* More and more appliances are getting embedded CPUs as these things get cheaper.

* 50 volts is much more difficult to electrocute yourself with.

* There’s a desire to combine power with data in a single cable; USB does this on the desktop (distances up to 2m), but something like a high-power version of Power over Ethernet would be about right for home and office (distances approx 100m).

But DC has a bad side that most people haven’t thought about, and that’s extinguishing an arc. AC will extinguish an arc very easily (the current crosses zero every half-cycle), so mechanical wall switches and mechanical circuit breakers are just dandy; DC will NOT extinguish the arc, so basically every single wall switch and every circuit breaker needs to be replaced with an intelligent silicon device. That’s the big hurdle.

A similar hypothesis is that the development of light water reactors in nuclear submarines made them the obvious model for commercial reactors where they became entrenched at the expense of alternatives. Unfortunately we once again run into the problem of unobserved alternate universes. We can’t tell if the untried alternatives really were better.

Not true at all. The light water designs became standard in the USA but Canada happily developed a heavy water design that became the basis for India’s reactors, and might be promising for Australia too.

Nuclear power never delivered on its promise for electricity “too cheap to meter”. It wasn’t hyperbolic PR either, but the genuine belief of anyone who had learned about Mass-Energy equivalence…

France seems to have managed to deliver cheap electricity, but like any auction the price is set by the second best alternative, and if there’s money to be made, you can bet there will be meters to collect that money. Besides, with nuclear power you are not paying for the energy itself, you are paying for all the ancillaries (like safety equipment, inspections, and waste disposal).

In NSW, I’ve got the approximate calculation that for every kWh sold (mostly coal and gas power), about 5c goes to the electricity producers (a competitive industry), about 5c goes to the retail and billing companies (also a competitive industry), about 10c goes to the man-in-the-middle who does wiring and metering (a government-owned monopoly), and 2c to GST. Presumably if production of electricity cost nothing at all we would still be paying at least 16c per kWh, but most likely we would be paying exactly the same, because householders have nowhere else to turn.
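
A quick check of that arithmetic (Tel’s rough figures, with GST taken at the Australian 10%):

```python
# Tel's approximate NSW breakdown per kWh (his figures, not official data).
producer, retail, network, gst = 0.05, 0.05, 0.10, 0.02   # dollars per kWh
print(f"total: {(producer + retail + network + gst) * 100:.0f}c/kWh")   # 22c

# If production were free, the wires-and-billing floor would remain:
floor = (retail + network) * 1.10    # GST at 10% on the remaining 15c
print(f"floor: {floor * 100:.1f}c/kWh")   # ~16.5c, consistent with 'at least 16c'
```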

Paul Frijters
11 years ago

Some easier examples that have libraries of research on them: the US still not on the metric system, the English still driving on the left, New Zealand independent, tax havens unmolested.

Meika is right. As soon as you realise you should think of this as evolution, you can borrow the whole of the literature on suboptimal design and left-over inefficiencies in nature: appendices, excessive hair growth, the male nipple, etc. Optimal for some things at some time, but not for us here now.

Tony Healy
11 years ago

On the topic of electrical power, AC was actually the superior, not inferior, technology.

Initially it had to counteract a disinformation campaign by DC enthusiasts and corporate interests. It was also much harder to understand, requiring a good grasp of physics and of what these days we call electrical engineering.

Tel, the references to transformers have nothing to do with the little adapters used by domestic devices in the home. They refer to the ability to step AC power up and down to different voltages, which makes it feasible to distribute power at very high voltages and still use it within the house at safe voltages. High voltage distribution incurs much less loss than low voltage distribution, and has thus made possible the efficient, centralised generation of electrical power.

John Foster
11 years ago

JQ says: “Mainstream economics provides a set of tools (the theory of public goods, externality and market failure, taxation and income distribution) to do the analysis and a widely-understood language in which to express the results. No existing alternative body of thought in economics comes close to this.”

I beg to differ. In relation to path dependence, evolutionary economists have been using replicator dynamic analysis, suitably modified from evolutionary biology, for some time to explain how path dependence comes about. Another coherent body of thought does exist and I am amazed that JQ ignores it.
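
For readers who haven’t met it, replicator dynamics in its simplest discrete form looks like this (a generic textbook sketch with made-up parameters, not Foster’s own model): a technology’s share grows when its fitness exceeds the population average, and if fitness rises with installed share, a tiny initial edge snowballs into lock-in.

```python
# Minimal discrete replicator dynamics with increasing returns
# (illustrative parameters, not drawn from any cited study).
shares = [0.52, 0.48]           # technology A starts with a slight edge

def fitness(share):
    return 1.0 + 0.5 * share    # payoff rises with installed share

for t in range(60):
    fit = [fitness(s) for s in shares]
    avg = sum(s * f for s, f in zip(shares, fit))          # mean fitness
    shares = [s * f / avg for s, f in zip(shares, fit)]    # replicator update

print([round(s, 3) for s in shares])   # tiny initial edge -> near-total dominance
```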

And RTG’s notion that path dependence is “true” but not “useful” is nonsensical. Studies of path dependence are analyses of historical processes. Such studies are not necessarily ‘true’, because of their subjective content, but they are without doubt useful in helping us understand how economic systems actually work. Mainstream economic analysis, based upon constrained optimization, is timeless (equilibrium) outcome analysis. As such, it is patently not ‘true’, given the very abstract assumptions necessary to identify timeless, equilibrium states. If this is the case, how can it be ‘useful’ in any scientific sense? Indeed, in Zombie Economics JQ spends a lot of time explaining why, in many cases, the use of such theory is dangerously misleading. JQ’s defence here is that mainstream economics provides some kind of language that economists understand. But what good is a language that incorrectly conveys meaning?

The economic historian Paul David, one of the pioneers in explaining how path dependence comes about, has come under attack from mainstream economists for many years. Even though complex systems theorists, such as Brian Arthur, have explained clearly why lock-in comes about, we see little or no acknowledgement of this in mainstream economics. It is clear that many mainstream economists are unwilling to admit such analysis because it has a devastating effect on their theorizing. As Paul Samuelson mused a long time ago (1972): “When the equilibrium of a system depends on (and is dictated by) its path toward equilibrium, the scientist has an uncomfortable feeling.”

derrida derider
11 years ago

What Tel said about AC versus DC. Though it’s not only voltage transformers: AC generators/motors were simpler, cheaper and more efficient in the pre-silicon era. Though you’re wrong about transmission losses – DC over long distances has much LOWER transmission losses than AC (see previous discussions on this blog of renewable energy grids). The higher the voltage, the less the resistive loss, but the longer the distance, the more AC reactive loss. Plus DC doesn’t need synchronisation, which makes network design much simpler.

I think if you were electrifying the world from scratch with today’s technology you’d think about a DC smart network – silicon, even high-voltage silicon, is damned cheap these days. But then would a rational planner want to forego 100+ years of using a technology that was clearly superior in all that time (AC) so as to persist with another technology (DC) that only now may have become superior?

Ranking of technologies as “superior” or “inferior” can change over time depending on quite exogenous advances in knowledge, or even just on social and demographic change. It’s another layer of difficulty for testing your hypothesis.

Tel
11 years ago

Tony, you are right to say that AC was the superior technology, but times do change. The small 50Hz AC transformers used in the home utilize the exact same physics as the big transformers in the street, and they do indeed perform the same job, which is stepping between different voltages – a job that is better done these days by silicon switch-mode technology (although the silicon does not scale as nicely from small to large, it can be done).

You don’t have to explain to me that high voltage long distance distribution is more efficient than low voltage distribution, but high voltage DC does have advantages with respect to being asynchronous, which was one of the reasons the New Zealanders used a DC-link between North and South islands. So if you are advocating the advantages of AC for long distance distribution, then you are already out of date because that segment of the market moved to DC a while ago.

This is another example of the rise and rise of silicon power electronics. NZ started out with mercury arc rectifiers, but these were all replaced (only recently) with roomfuls of silicon; the takeover is in progress.

Tel
11 years ago

Though it’s not only voltage transformers: AC generators/motors were simpler, cheaper and more efficient in the pre-silicon era.

The design of motors has mostly changed for the better (especially if you own a neodymium mine), because the type of brushless DC motors you find in computer hard drives, floppy drives and CD-ROM drives require complex control circuits that simply were not available in Tesla’s day and age.

The old workhorse induction motors require AC; they are easy to build, reliable and low-maintenance, but they are heavy, use lots of copper and iron, are not as efficient as modern motors, and the speed/torque curve is intrinsic to the motor (since modern motors require a control circuit, they always come with a variable speed controller built in, at no extra cost).

ted
10 years ago
Reply to  Tel

I think you are confused. There are many uses for motors and many different types, and the use dictates which type is better suited. AC motors are not going away soon – quite the opposite. The DC motors you mention are suited for use as servos.

Paul Frijters
11 years ago

You motivate the post by saying you are looking for path dependence which gets ‘us’ (society) into an inefficient technology. That is a coordination failure, and your own examples are all cases of coordination failures too, as are mine. Perhaps you might want to re-state the question of what you are looking for.

Michael
11 years ago

Cheap electricity has locked us into poor building technology and exposed us to costlier-than-necessary peak electricity generation. Cheap air-conditioning and low electricity prices have crowded out good building design and construction and locked us into heavy air-conditioner use, where the cost of running air-conditioning on hot days is subsidised by those unable to afford air-conditioning.
It is much more expensive to build an energy-efficient dwelling in Australia because the materials and knowledge are undeveloped and remain niche products.
This is not exactly technological, but the decision to allow doctors to withhold medical records has also locked in place inefficiencies that are probably reducing health care outcomes.

Cameron Murray
11 years ago

The main point of the post seems intuitively correct to me – the idea is very powerful, but real-life examples where path dependence has led to great social problems or inefficiencies are not so easy to find.

One reason for this surprising observation is that there is path dependency not only in technology, but also in research and innovation, and in consumer preferences. There is no right or wrong way to power a car, or right or wrong rail gauge to use. And both consumer preferences and innovation evolve along the same technology path.

In the software example, maybe one system was objectively better, but if users are trained in one system and not the other, then that system becomes their preference.

Further, in the rail example, commuters benefit from not having to switch trains on long routes etc., and thus they encourage integration (or overall efficiency) over other short-haul efficiency outcomes.

Your link provides some better criteria for when path dependence can be a problem – durable capital investment. Which makes me think about city development. It is very hard to reverse decisions about where to put roads and how to lay out new areas of the city. Once you have, it is very difficult and expensive to retrofit new roads and transport corridors. You have to buy back a lot of private land (reconfiguring existing thoroughfares) or tunnel.

Look at Brisbane’s cross-river rail problem. We have only one rail bridge across the river. That is because the network evolved to benefit most from the first bridge. Now that the bridge is at capacity it is very difficult to tap into the network to make a new crossing.

We also have cities evolve to take advantage of transport corridors that might become outdated in short periods.

What’s the point of this rant? That I agree with your main observation, and utility impacts are likely to be very low in most cases. However, in the situations with the most durable capital investments and the most people affected by the decisions (cities), I think there may be some (probably unquantifiable) impacts.

Steve
11 years ago

Perhaps the laborious pace at which video and music content is being made available online via direct download, at a reasonable price and in a timely fashion, can be explained along these lines.

We have recording industries because historically we needed someone to make the physical copies and distribute them; getting the content to consumers was the major hurdle. Now these industries are obsolete, but they are so tied into the whole chain, from creative production right through to delivery to the end consumer, that they are using their position to resist the demise of their own obsolete role.

john
11 years ago

The story of QWERTY, as told by Stephen Jay Gould, is a bit more interesting. By about 1880 the accurate machining technology needed to make a keyboard that could cope with two adjacent keys being pressed in quick succession was widely available, and a number of competing, more efficient keyboard designs were on the market.
A competition was organised for a World Fair; by chance the biggest maker of QWERTY machines hired a typist who had just invented touch-typing, and not surprisingly ‘touch-typing’ won the day… for QWERTY.

john
11 years ago

Reply to  wilful

Gas fridges do cause more than a few fires (and the odd death).

Tel
11 years ago

Steve, that’s a valid point, but I think it’s more to do with the way the mega-corp has formed in our society rather than any technical issue. For example, Sony owns the rights to a lot of music, and owns the patents on various recording and playback technologies, and sells consumer equipment, and sells professional recording equipment, and has links into movie and music studios, and even the computer game industry.

Such a structure produces rigidity and inertia.

john
11 years ago

Tel, a small point:
a degree of “rigidity and inertia” is desirable in something as complex as society, otherwise it would be “madman’s custard”… “all over the place”.

The point of punctuated equilibrium is that most of the time evolution goes nowhere much; it is only when there is a fairly big and unexpected event that major new forms appear, an inherently chancy phenomenon.

Yobbo
11 years ago

MS-DOS outsold Apple products because it was the only OS capable of running Lotus 1-2-3, the most important application of the early age of personal computing.

John Quiggin
11 years ago

The Lotus 1-2-3 part of the story strengthens the lock-in argument. Excel was available on the Mac from 1985, and was far superior, as was shown when the two eventually competed directly. Similarly, MS Word for Mac was much better than the MS-DOS product of the same name – today’s versions are all descendants of the Mac product.

One point illustrated by this is that it is going to be hard to find neat examples of lock-in for a single product (QWERTY vs Dvorak, for example). Rather you get a whole ecosystem which is resilient to shocks in one component.

Yobbo
11 years ago

MS Word for Mac was much better than the MS-DOS product of the same name

Possibly, but it wasn’t better than WordPerfect. Lotus 1-2-3 was better than any other spreadsheet available on any platform at the time, and by the time Excel was released it was way too late for the Mac to catch up. Businesses had already made their purchasing decisions and software developers had already picked sides.

The selection of software available on Apple PCs was awful in comparison to DOS/Windows machines even into the 2000s.

john walker
11 years ago

Yobbo
I suggest the biggest difference was in the number of computer games available.
BTW I have read that Betamax lost out as the standard video format because the American porn industry did not adopt it.

Yobbo
11 years ago

More and better-quality games were available for the Amiga, which also lost out, but which at the time was a serious competitor to Apple and DOS machines.

Sancho
11 years ago
Reply to  Yobbo

I don’t recall the Amiga being a competitor to Apple or DOS in anything other than games. The Amiga’s graphic abilities made it great for games and media manipulation, and superior to consoles of the time, but business applications seemed sub-par, and outclassed by PCs and Macs.

Commodore eventually directed the Amiga into a primarily gaming role because that was its niche, but went bankrupt anyway.