Courtesy of Slashdot (I think) I came across this interesting article reporting arguments that string theory has been the death of physics, or rather that it has taken the discipline down a blind alley. Though many disciplines have suffered from ‘physics envy’, not least economics, it looks like physics has succumbed to a nasty dose of what other ‘less hard’ sciences like economics have suffered from for ages – what I’ve just decided to call ‘epistemological bubbles’.

I’m no physicist (or, as I heard someone say a while back, ‘I’m no rocket surgeon’). However, if the story is to be believed, it seems familiar. A discipline gets seduced into biting off more than it can chew by grand dreams of a ‘theory of everything’. Of course every now and again some grand theory does come along. Newton and Einstein managed theories of everything; Keynes and Smith in economics. (Well, not everything, but theories of a great deal.)
But when you’re not smart enough, or there’s no theory to be had, the result of such striving is ’empty science’. Somehow the necessary dialectic between theory and practice, ideas and facts, becomes sterile, and a few decades pass with little real progress – sometimes there’s even regress.
Here’s an edited version of the article on string theory.
“When it comes to extending our knowledge of the laws of nature, we have made no real headway” in 30 years, writes physicist Lee Smolin of the Perimeter Institute in Waterloo, Canada, in his book, “The Trouble with Physics,” also due in September. “It’s called hitting the wall.”
He blames string theory for this “crisis in particle physics,” the branch of physics that tries to explain the most fundamental forces and building blocks of the world.
String theory, which took off in 1984, posits that elementary particles such as electrons are not points, as standard physics had it . . . vibrations of one-dimensional strings 1/100 billion billionth the size of an atomic nucleus.
Different vibrations supposedly produce all the subatomic particles from quarks to gluons. Oh, and strings exist in a space of 10, or maybe 11, dimensions. No one knows exactly what or where the extra dimensions are, but assuming their existence makes the math work. . . .
But one thing they haven’t done is coax a single prediction from their theory. . . .
Partly as a result, string theory “makes no new predictions that are testable by current – or even currently conceivable – experiments,” writes Prof. Smolin. “The few clean predictions it does make have already been made by other” theories.
Worse, the equations of string theory have myriad solutions, an extreme version of how the algebraic equation x² = 4 has two solutions (2 and -2). The solutions arise from the fact that there are so many ways to “compactify” its extra dimensions – to roll them up so you get the three spatial dimensions of the real world. With more than 10 to the 500th power (1 followed by 500 zeros) ways to compactify, there are that many possible universes.
“There is no good insight into which is more likely,” concedes physicist Michael Peskin of the Stanford Linear Accelerator Center.
If string theory made a prediction that didn’t accord with physical reality, stringsters could say it’s correct in one of these other universes. As a result, writes Prof. Smolin, “string theory cannot be disproved.” By the usual standards, that would rule it out as science.
String theory isn’t any more wrong than preons, twistor theory, dynamical triangulations, or other physics fads. But in those cases, physicists saw the writing on the wall and moved on. Not so in string theory.
“What is strange is that string theory has survived past the point where it should have been clear that it wouldn’t work,” says Mr. Woit. Not merely survived, but thrived. Virtually every young mathematically inclined particle theorist must sign on to the string agenda to get an academic job. By his count, of 22 recently tenured professors in particle theory at the six top U.S. departments, 20 are string theorists.
One physicist commented on Mr. Woit’s blog that Ph.D. students who choose mathematical theory topics that “are non-string are seriously harming their career prospects.”
To be fair, string theory can claim some success. A 1985 paper showed that if you compactify extra dimensions in a certain way, the number of quarks and leptons you get is exactly the number found in nature. “This is the only idea out there for why the number of quarks and leptons is what it is,” says Prof. Peskin. Still, that is less a prediction of string theory than a consequence.
Like I said, I’ve no idea who’s right in physics, but it’s eerily similar to what one might call the ‘epistemological bubbles’ we’ve seen elsewhere. We’re accustomed to citing old dead sciences like phrenology, but modern sciences have bad problems like this too.
Economics has spent more than its fair share of time barking up wrong trees and on wild goose chases. Often the reason is hubris – practitioners get hungry for grand syntheses when their knowledge is way too puny to support them. I think of the search for micro-foundations in economics as like this. Keynes’ micro-foundations were all that were necessary for macro-economics. That is, one knows broadly what human beings are like – they’re fairly rational but also given to less rational, more emotional behaviour of various well known kinds.
You don’t need much more than that to investigate various macro-properties of aggregations of such agents. One doesn’t need optimising micro-foundations to tell the stories of macro-economics – bubbles, busts, liquidity traps, steady growth and so on.
But this kind of informality and improvisation was ruled out on grounds that related more to the aesthetics of the discipline as it was then coming to be practised than to stricter scientific grounds of realism or usefulness. (Indeed there’s a truly strange little debate begun by Milton Friedman about whether or not you want your assumptions to be ‘realistic’ – but I’ll leave that to one side here.)
Thus it was simply presumed that micro-foundations could only be of an optimising form (even though, as Herbert Simon demonstrated theoretically and from observation, human beings don’t do much optimising – they follow rules of thumb).
In addition to this grand hubris, there’s an analogous kind of micro-hubris. It goes like this. Although practitioners are not after some grand theory, it becomes clear that important parts of the story are being left out of the discipline’s formal constructs. The old way of handling this – say Alfred Marshall’s or J.R. Hicks’ way of thinking of economies of scale in trade – is to say ‘these phenomena are both real and important. So they should not be forgotten, but they’re very difficult to model, so we’ll have to try to take them into account as we go – informally.’
But we’ve seen all sorts of strange concoctions in the last few decades that take economics in directions which are useless or positively strange. The example I know best is Strategic or New Trade theory. Here economists began modelling scale economies in trade in the 1980s – indeed at around the time string theory got going!
Now no-one can blame anyone for having a crack at solving a formal problem (accommodating scale economies into trade theory) that had eluded economists previously – especially when there are some new formal modelling techniques around. But you’d think that prominent in their work would be some consideration of why it had failed in the past and why it could be expected to succeed this time. If anyone can find any such serious consideration (say lasting substantially more than a couple of sentences) in any major milestone article on strategic trade theory in its first decade, I’ll donate $100 to their nominated charity.
As it turned out, a few decades passed and nothing much emerged except stuff that was intuitively clear at the outset. Thus strategic trade theory offered itself as a comprehensive rebuilding of trade theory and ended up with a few commonsensical conclusions – namely:
- that scale economies make the case for free trade trickier to prove formally – by creating circumstances where policy makers can succeed in acting strategically; but
- scale economies also make trade itself (and so free trade) more important because with scale economies there are more gains to trade, more gains from specialisation.
In macro, here’s Krugman observing that, despite all the theoretical controversy and formal development of macroeconomics in the last few decades, the basic apparatus developed during the Keynesian revolution from the late 1930s through to the 1950s remains the most useful one we have for thinking about macro-economic policy problems.
Another example is ‘rational expectations’.
Nothing wrong with trying to accommodate expectations into theory – that was obviously important. It’s the word ‘rational’ that is the problem. The way ‘rational expectations’ works is to insert the formal predictions of some theory into the expectations of the agents we are modelling. The problem is that the world is not like that. Agents have different theories and take different positions, so taking account of changing expectations in macro-markets is a bit like taking account of scale economies in trade: you can do it, but you do such violence to what’s going on by doing so that the deprecated alternative of a more informal understanding of the phenomenon might be better.
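To make the mechanism concrete, here is a toy sketch (the model p = a + b·E[p] and its numbers are made up for illustration, not drawn from any particular paper) of the rational-expectations trick: the theory’s own prediction is substituted for the agents’ expectation, so the ‘solution’ is a fixed point of the model.

```python
# Toy illustration of the rational-expectations trick (the model and
# numbers are hypothetical, not from any source). The outcome this
# period depends on what agents expect it to be; 'rational
# expectations' closes the model by requiring that the expectation
# equal the model's own prediction - a fixed point.

def outcome(expected_p, a=2.0, b=0.5):
    """Price this period, given what agents expect it to be."""
    return a + b * expected_p

# Solve for the rational-expectations fixed point by iteration:
p = 0.0
for _ in range(100):
    p = outcome(p)

print(round(p, 4))  # 4.0, i.e. a / (1 - b)
```

With a = 2 and b = 0.5 the fixed point is a/(1 − b) = 4. Note that the model only ‘closes’ by assuming every agent holds the model’s own forecast – precisely the step objected to above, since real agents hold different theories and take different positions.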
Then there’s real business cycle theory, which is truly strange. Real business cycle models generate macro-economic phenomena like booms and busts entirely from optimising, rational micro-foundations. This usually requires certain parameters (like the elasticity of labour supply) to be around ten times higher than they appear to those dull and worthy souls who go out into the empirical world and try to measure them. Real business cycle models typically treat recessions as driven by micro-economic changes (often changes in technology). When some technology shock comes along (say computer technology improves productivity and wages in some sectors and not others), forward-looking people – your local bus driver, for instance – redo those labour/leisure optimisation calculations they’re always re-running to make sure things are still optimal. They then decide to take an unpaid holiday for a few years. Voila: recessions explained. (I am not joking.)
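To see why the elasticity matters, here is a toy calculation – a constant-elasticity labour supply rule with made-up numbers, not any published real business cycle model – showing how much hours worked swing after a 5% wage shock under a modest, roughly empirical elasticity versus the much higher values such models tend to need.

```python
# Toy illustration (not any published RBC model): labour supply with a
# constant-elasticity rule h = h0 * (w / w0)**eta, where eta is the
# wage elasticity of labour supply. All numbers are made up.

def hours_worked(wage, base_wage=1.0, base_hours=40.0, eta=0.5):
    """Hours supplied when the wage moves, for a given elasticity eta."""
    return base_hours * (wage / base_wage) ** eta

# A 5% negative 'technology shock' to the wage:
shock_wage = 0.95

micro_eta = 0.5   # the modest sort of value empirical studies find
rbc_eta = 4.0     # the much larger sort of value some calibrations need

print(round(hours_worked(shock_wage, eta=micro_eta), 1))  # ~39.0 hours
print(round(hours_worked(shock_wage, eta=rbc_eta), 1))    # ~32.6 hours
```

With the empirically measured sort of elasticity the ‘recession’ amounts to about an hour a week; only with the inflated value does the bus driver’s multi-year unpaid holiday start to look like optimising behaviour.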
In a remarkably prescient and powerful piece, written around two centuries before Thomas Kuhn made ‘paradigm shifts’ all the rage, Adam Smith argued that aesthetics drives science. He was right – but ‘epistemological bubbles’ illustrate the downside when the aesthetics go awry, when they become a self-contained echo-chamber rather than existing in a healthy dialectic with the world, and with practical usefulness.
- Postscript: if Krugman is regarded as ‘left’ – something that might have surprised him a decade or so ago – then here’s (pdf) Greg Mankiw, a Bush economist, arguing pretty much the same thing.
Popper’s “Quantum Theory and the Schism in Physics” (1982) could be helpful, with its argument that physics had run into trouble with some epistemological and metaphysical problems at the core of the enterprise.
Much the same applies in economics, especially as some of the same presuppositions are at play in each case.
http://oysterium.blogspot.com/2006/02/maths-invades-economics.html
Anyway it will be really helpful when someone finds the time to get the mop and bucket to work and do some serious housecleaning.
There’s a good discussion of the physics issues at Cosmic Variance, with contributions from Woit and Smolin as well as some string theorists and other physicists, at http://cosmicvariance.com/2006/06/19/the-string-theory-backlash/. String theory may or may not be the goods, but I don’t know that all bubbles are equally empty. Philip Ball’s new book “Critical Mass” goes over very similar ground to your post in relation to economics. I haven’t finished it yet, but it seems he is advocating a blend of physics and economics, while recognising that economics will always be “harder” than physics, because it is based on people, not particles or strings. I would be interested to hear your views if you have read that book.
Some thoughts inspired by this post…
[The URL is http://www.7gs.com/pharoz/?p=748 if the link above doesn’t work.]
Sounds like Kuhn has trumped Popper. Or in other words… Kuhnian science needs a healthy dose of Popperian medicine.
It seems that the avant-garde of science (physics is the “paradigm” example, if you’ll excuse the pun) has long since travelled beyond the reach of popular, or even inter-disciplinary, understanding. Scientists at this level thus either achieve a kind of priestly status in the popular mind (I’m thinking particularly of Hawking… such physicists are often portrayed as the guardians of the basic truths of the universe… even though, or because, we don’t fully understand what they are on about), or they are unjustly dismissed as engaged in exercises qualitatively no different from anyone else’s (vulgar postmodernism).
It is imperative that the scientific community regulates itself and holds itself true in a way that preserves its own standards and freedom; otherwise it will be unjustly lionized or demonized from the outside.
That said, I think that the objective of the string theorists is noble and their efforts should not cease simply because we don’t even know if the theory is wrong. The problem is that the nature of phenomena at the level at which they deal is not yet observable or measurable empirically (as far as I know). One can only “demonstrate” the nature of such a level analytically, through mathematics. It seems to hover in a blurry limbo between physics and metaphysics.
Great post. Not being a physicist either I can’t know if string theory really is a dead end, but I suspect it is too. More than anything I really like this term “epistemological bubble”. It perfectly describes how knowledge appears to be – and indeed IS for a time – swelling in a certain direction, but in a swift realisation/stroke of brilliance/accidental discovery the swelling bursts to reveal nothing of substance.
These four related references deal with the relationship between various paradigms and their influence on human culture altogether.
1. http://www.dabase.net/christmc2.htm Einstein meets Jesus
2. http://www.dabase.net/spacetim.htm Space-Time IS Love-Bliss
3. http://www.daplastique.com Space-Time Is Love-Bliss communicated via photography
4. http://www.aboutadidam.org/lesser_alternatives/scientific_materialism/index.html
Cliff wrote “Sounds like Kuhn has trumped Popper. Or in other words”
I don’t know that “string theory” is a dead dog yet. After all, Newtonian physics was around a while before Albert. And Newtonian physics is still good for calculating a lot of common sense problems. And a lot of people still seem to be pushing string theory quite hard.
But it is so hard for an outsider (and I suspect even for an insider sometimes) to comprehend some of the reasoning. Every time I look at a book about quantum physics, the limits of mathematics, the nature of infinity, or the start of the universe, I find myself getting frustrated by the density of the arguments. I follow (I think) the stuff about how light must be both wave-like and particle-like, and I suppose I follow the idea that (at least at very tiny levels) the act of observing affects the behaviour of particles, but after that my brain turns into vegemite.
Take the theory that there are ten dimensions, but that in this particular part of the universe, or this particular alternative universe, six of them cannot be perceived. Does that mean that in some sense they “look” like length, or breadth, or (gulp) time? How? Or is it just a metaphor for some other type of quality?
Similarly, I can understand that there might not have always been time, but (obviously, sorry) that sentence contains a temporal assumption which is that there was a “before” ie a time based concept – before there was time.
I suspect that some of this is just the limits of language, but it makes it hard to work out what is a statement of fact and what is a metaphor.
Having said all that, I accept that quantum theory allows all sorts of scientific developments, I just wish someone would explain it all to me clearly. I understand that the idea that a lawyer was asking for clear explanation will give rise to a degree of mirth, not unmixed with schadenfreude, but there it is.
I don’t know that Kuhn trumps Popper or vice versa. I think that what might happen is that the Popperian falsification idea works quite well in a given set of problems (oh all right, a given paradigm) but that it doesn’t go on seamlessly forever. There’s a popping noise, the bubble bursts as the set of assumptions starts to collapse under its own weight or in the light of irreconcilable observed data or mathematical operations, and then you go Poppering away inside the next intellectual universe.
Having said that, I suspect it is a lot more messy really. My guess is that most development in any area is a bit of trying to reinforce a theory, a bit of trying to falsify it, attempts to explain it in the light of various meta-theories and then a somewhat messy resolution for the time being.
According to Penrose, scientific insights drop into the minds of the gifted while they are waiting for a bus. At the other extreme, we have large teams of underpaid post-docs working in pharmaceutical firms under the tyranny of the research-grant entrepreneur who heads the laboratory, often more interested in establishing defensive patents than in real discovery. Yet the human genome slowly reveals itself. Science is a many-splendored and chaotic thing. The scientific community is at times a granfalloon and at others a karass.
You know, Newton and the lads managed to push science from alchemy to quantum mechanics without ever having heard of Popper or Kuhn. While the bare fact of the success of science is intriguing, do we really need a theory of science any more than we need a theory of music? The human mind seems to navigate the musical and scientific landscapes with its own non-logical compass.
….Oh dear. Maybe I shouldn’t have looked at those links that John posted above?!… But the fact is that nobody knows why science has been successful, since no-one knows why simplicity or math or aesthetics should be a guide to truth. The string guys are at least sticking with math and aesthetics, but seem to have left simplicity behind.
BTW: In the next census this coming August, I will again be listing my religion as Bokononism, hoping to overtake those arrogant Jedi as the fastest growing Oz religion.
Very interesting post Nicholas. And you’ve managed to skewer most of the big economic ideas that have come down to the laity in some form from the Olympian heights of academe in the last couple of decades.
It also nicely segues into your Paul Monk post as his review begins with a discussion of Sokal and Bricmont’s pricking of a few “post modern” epistemological bubbles.
Rafe, I don’t think you are right that Kuhn “did not give us a philosophy of science but only a sociology”. He certainly expounded the former, which had sociological aspects. His philosophy, unfortunately, also comes with constructivist aspects that are highly problematic on various interpretations.
Nice comments Richard. As a lawyer too, I can empathize. We’ve probably read a few similar books. And I agree with you about Popper that he doesn’t give a convincing picture of the actual practice of science. Nor does Kuhn for that matter.
On compactification, I don’t know how to visualize the rolled-up dimensions. One probably can’t – like trying to picture higher-dimensional topological objects, I suppose. But the extra dimensions are supposed to have indirect physical effects notwithstanding that they are too small to observe. A good popular account of this is Brian Greene’s “The Elegant Universe”.
“And I agree with you about Popper that he doesn’t give a convincing picture of the actual practice of science.”
Gaby, have you actually read anything that Popper wrote, or do you just recycle fragments of criticism put about by other people?
What do you think is wrong with his ideas about
a) the practice of science?
b) epistemology?
c) methodology?
Be sure to find out what he actually wrote so your critique relates to his ideas and not just to figments of someone’s imagination.
About the only thing I remember about Popper is “falsification” and about Kuhn is “paradigm shift” and I only remember that from some economics rather than physics class at uni in the dark dark past…but I enjoyed this post Nicholas. I was now planning on saying something profound but too much tennis and soccer in between planes and automobiles has fried my brain. (Well actually that’s no excuse, I’m just content, as always, splashing about the shallows)
Helpful comment from Catallaxy
http://badanalysis.com/catallaxy/index.php?p=1882&cp=5#comment-120690
“Most critiques of Popper are either, essentially, sociology in disguise. They argue that there’s more to our current scientific practice than Popper specifies. This may be true. But they have no arguments that these details are necessary rather than contingent facts about our science, or that science couldn’t be otherwise and yet still be science.”
“Or, they are analytical philosophers showing that falsificationist logic can’t vouchsafe you any certainties beyond those vouchsafed by verificationism. What they miss, is that Popperians don’t expect to have this kind of certainty. All Popper gives is a philosophical framework in which it makes sense to pay attention to evidence rather than to ignore it. It explains why science needs evidence even if induction doesn’t work.”
Thx for the link Rafe. The whole comment on Catallaxy is an interesting one. I’m a bit suss on this line though.
Thanks Nicholas. As I have said before: a scientist who operates in a critical and imaginative manner probably doesn’t need to read Popper unless they need to counter unhelpful ideas that are propagated by people who are drawing on other philosophers such as Lakatos, Kuhn and Feyerabend (or Stove).
I don’t want to be stuck with the job of defending Popper ad nauseam; I just insist that people need to understand stuff before they rubbish it.
As to the dispute between Popper and Feyerabend on demarcation, I don’t know what telling blows Feyerabend landed. Certainly Popper was over-committed to defending his ideas on demarcation but that did not make them wrong, it just meant that he wasted a lot of time in arguments that didn’t help anyone very much.
What precisely was Popper’s mistake on demarcation that Feyerabend corrected? This is not asked in a hostile or aggressive tone, I just need to get some clarification.
I don’t take your comment as hostile Rafe.
As I see it, Popper’s mistake is where he starts – the idea that science needs rational criteria for demarcation from non-science.
Popper explains this as his starting point – for instance in his autobiography. As I recall Feyerabend argues – pretty convincingly – that the project, tempting as it is, doesn’t work – that one cannot produce specific criteria or rules that give you the demarcation.
This has always been taken by Feyerabend’s opponents to be a lurch towards irrationality – whereas it’s just where rationality takes one, as it took Zeno with his paradoxes, Kant with his antinomies, and Gödel with his incompleteness theorem.
Attacking Feyerabend’s attack on demarcation as obscurantist and irrational makes as much sense as saying that Ken Arrow was trying to undermine social decision making with his impossibility theorem.
As I said earlier, I can’t see how Popper advances the cause much beyond what the American pragmatists did in the mid to late 19th century. In fact, as Peter Singer argues, there’s a great irony in Popper: having started out attacking Hegel (by comprehensively misunderstanding him – not hard, by the way), he ended up in positions that had been foreshadowed by post-Hegelians (like the pragmatists) and by Hegel himself.
I’d like to take up the other aspect of Phil Jones’s quote, in addition to the Feyerabend aspect that Nicholas has commented on.
I don’t understand why arguments are required concerning “necessary facts” about science. This smacks to me of a naive analyticism. One of the aims of a philosophy of science must be to explain the practice of good science. I can’t see why such propositions have to be “necessary”, i.e., true in all possible worlds.
Secondly, analytical philosophers and philosophical naturalists left the search for “certain” foundations to knowledge behind decades ago. I’m also not sure what “verificationism” vouchsafes in this context, given that I understand it as a semantic notion giving a criterion of meaningfulness. But it may be intended to refer to the “confirmation” of hypotheses. I’m just not sure.
I don’t think the two possibilities given are exhaustive either. There is at least a third: that it is not always a theoretical statement that gets “falsified”, but possibly any of a number of auxiliary theories, background assumptions, empirical generalizations etc. involved in testing and falsifying an hypothesis.