Review: Drift into Failure

While having lunch with Ken Parish last week, I chatted a bit about a very long book review I wrote a few weeks ago, published on my personal blog. He asked me to cross-post it to Troppo. Enjoy.

Drift into Failure, by Sidney Dekker, is one of the most thought-provoking books I’ve read in a while.

“Thought provoking” is usually a shorthand used by buttered-up friends of the author to mean “I agree” or “he/she provided a great blurb for my dust jacket and now I’m returning the favour”.

But in this case, I found that the book provoked a lot of thought on my part. It tied in with a lot of other books I’ve read in the past year or so, some of which I’ll name-check.

So … what’s it about?

Dekker discusses how complex systems ‘fail’ in unforeseen ways. He characterises some of these failures as ‘drifts’. The system didn’t visibly zoom towards failure; there was no massive perturbation, no onrushing catastrophe, not even dark clouds on the horizon. In a drift-failure, the failure just happens, and only afterwards is there any chance of diagnosing the whys and hows.

Drift essentially crosses two fields of work. The first is reliability / failure studies and the second is complex systems. I’m not very familiar with reliability studies except through a Chinese-whispers version that has been transmitted via software operations literature. I feel that I have a more-than-nodding acquaintance with systems theory through a uni course and my own reading in that area.

To a reader unfamiliar with either body of thought, this book might be a bit difficult. Dekker isn’t really addressing the layperson; the book is really addressed to practitioners in the reliability/failure field. Dekker’s ultimate hypothesis is that a “Newtonian-Cartesian” approach to failure does not and cannot address failures in complex systems.

If you’re not from the reliability field, reading Dekker is a bit like being an atheist at a theological debate. Interesting, but a little hard to follow in parts. But boy does he have lots of points to make.

I respectfully disagree

I don’t think Dekker quite nails his case down. For the rest of the review I will try to explain why. Hang on, because it’s a long, circuitous ride.

Postmodernism

As I said above, Dekker posits that a Newtonian-Cartesian worldview can’t explain or predict failures in complex systems. Of most concern for yours truly is that, in addition to reaching for complex systems theory, he reaches out for postmodernism. I’m not a particular fan of postmodernism — I think that some of its insights can be usefully appropriated into modernist thinking, but its universalist claims are dangerously nigh to total bunkum. I don’t think Dekker needed it.

Dekker uses postmodernism to posit that failure is a negotiated label. A system isn’t “failed” until after a failure, and the very concept of failure is constructed as an agreement between observers and participants of the system. Hence: failure is subjective.

Well, yes. Certainly, failure is, after a fashion, transmitted backwards in time. But many of the systems humans build are purposive. The purpose is known in advance. Even before any negotiation between subjects takes place, many failures are instantly recognisable as failures.

Local optimality, global optimality and failure

Dekker chose the “drift” metaphor because a system arrives at failure in small, locally-rational steps. In one case study, he examines Alaska Airlines Flight 261 in great detail. In this case study, a series of small relaxations of safety standards eventually led to a catastrophic system failure (sudden, unpredictable loss of human life).

Dekker asks: when did the system fail?

  • Did it fail when the particular acme nuts failed?
  • When maintenance was not performed?
  • When times between scheduled maintenances were extended?
  • When the design was made without accounting for the possibility of the above?

This goes back to the distinction between proximal and ultimate causes, popular amongst both reliability studies practitioners and lawyers. The proximal cause is clearly the acme nuts failing … but in this case, Dekker says, where is the ultimate cause? It’s diffused across the entire system, across a series of locally optimal solutions.

Local and global optimality is a classic human problem. In Daniel Kahneman’s excellent book Thinking, Fast and Slow, he metaphorically describes different ‘selves’. One self is a fast, almost subconscious self; an intuitive rationaliser. It excels at locally optimal solutions. A second ‘consciously rational’ self must be aroused purposefully. “Math is hard”, as Barbie says, so let’s go shopping. Hence we almost never actually engage that second self, even in situations where we think we have. Kahneman includes lots of fiendish little self-tests for the reader that abundantly prove his case.

Once you see the distinction between the locally optimal and the globally optimal, cases jump out of the woodwork everywhere you look. It’s funny, because I learnt the concept of local/global optima at university, but it never really clicked until I read Kahneman.

And as with optimality, so too with rationality. What is rational locally may transpire to have irrational global consequences. Little agents optimising their corner of a large system can add up to a failed system. Part of Dekker’s broader hypothesis is that assigning blame is a bit rich in such circumstances — everyone was just acting according to sensible rules within their own situation. It’s all so complicated; give them a break.

I’m not so sure. Take for example the question: “when was the system in a failed state?”

By itself, that question presupposes a binary logic: the system IS in a failed state, OR the system IS NOT in a failed state. Dekker sees, as anyone can, that this is a bit of a nonsense, and pushes the problem down to our notion of blame. I prefer to push it up to an overly narrow conception of logic.

To explain what I mean, I need to make two diversions.

Diversion I: Fuzzy Logic

Here’s where fuzzy logic pops in (and also where, based on the title of this subsection, I lose both of the readers who got this far without giving up out of boredom).

The core insight of fuzzy logic is that we can think of things as belonging to “fuzzy sets”. In normal logic, sets are cut-and-dried. Remember Venn diagrams? They all looked like this: two crisp circles, one blue and one yellow, overlapping in the middle.

Look at all that sharp delineation! Any “thing” in that diagram is indisputably in exactly one of four possible states:

  • Blue only
  • Yellow only
  • Blue AND Yellow
  • NEITHER Blue NOR Yellow

Traditionally we ignore that last condition — the neither/nor — because that way we get a neat formula for the possible number of states for any number of sets or logical variables (2ⁿ − 1 for n sets, once the neither/nor is dropped).

There’s a lot to like about conventional logic. It’s the granite foundation of the field I hold a degree in — Computer Science. Given ANDs, ORs, NOTs and some ones and zeros, one can build essentially infinitely complex systems (I’ll return to this point later on).
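
Here’s a toy sketch of that crispness in Python (my own illustration, nothing from the book; the sets and their contents are invented):

```python
# Crisp set membership: every "thing" is either in a set or not.
blue = {"sky", "ocean", "teal"}
yellow = {"sun", "banana", "teal"}

def classify(thing):
    """Place a thing in exactly one region of the two-set Venn diagram."""
    in_blue, in_yellow = thing in blue, thing in yellow
    if in_blue and in_yellow:
        return "Blue AND Yellow"
    if in_blue:
        return "Blue only"
    if in_yellow:
        return "Yellow only"
    return "NEITHER Blue NOR Yellow"

for thing in ("sky", "sun", "teal", "grass"):
    print(thing, "->", classify(thing))

# With n sets there are 2**n regions, or 2**n - 1 ignoring the neither/nor.
print(2 ** 2 - 1, "regions for two sets, once the neither/nor is dropped")
```

Every membership test returns a hard True or False; no shades of grey anywhere.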

But it doesn’t actually describe a heap of common problems in the actual world.

And speaking of heaps — here’s a classic philosophy question: is this a pile of sand?

Well yes. And if I remove a grain? Still yes. In computer science terms I’ve performed an inductive step; it’s now “turtles all the way down”. The pile of sand is always a pile of sand, perhaps until I remove the last grain.

But we know that’s not “true”, in the everyday sense. A few grains of sand does not a pile make. And it gets worse, for when does the pile become a dune? And when does the dune become a desert?

Fuzzy logic sidesteps the issue by saying that the pile of sand has a degree to which it is a pile of sand. This is expressed with a “membership function”. To what degree does this pile of sand belong to the set of all piles of sand? Well in this case, I think we can all agree that it’s a pile of sand, so we grant it a high membership and say it’s a member of that set to a degree of 0.9.

When it gets small, we lower its membership degree. A few handfuls of sand might only rate 0.05 in the membership function. And as it grows very large, its membership degree again shrinks to a low number, even as its membership of the set of dunes grows larger.
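
To make that concrete, here’s a minimal sketch of a membership function in Python. The breakpoints are numbers I’ve invented purely for illustration; nothing canonical about them:

```python
def pile_membership(mass_kg: float) -> float:
    """Degree (0.0 to 1.0) to which a quantity of sand is 'a pile'.

    An invented, piecewise membership curve: too little sand barely
    counts as a pile; far too much stops being a pile and starts
    being a dune.
    """
    if mass_kg <= 0.1:        # a few handfuls: barely a pile
        return 0.05
    if mass_kg < 5.0:         # ramping up towards a definite pile
        return 0.05 + 0.85 * (mass_kg - 0.1) / 4.9
    if mass_kg < 1000.0:      # comfortably a pile
        return 0.9
    if mass_kg < 100000.0:    # shading into dune territory
        return max(0.1, 0.9 - (mass_kg - 1000.0) / 120000.0)
    return 0.1                # by now it's really a dune

for mass in (0.05, 2.0, 50.0, 50000.0):
    print(f"{mass:>8} kg -> a pile to degree {pile_membership(mass):.2f}")
```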

Hence in Venn diagram terms, fuzzy sets look a bit more like this: the same two circles, but with soft, blurred edges rather than sharp boundaries.

It’s … fuzzy, as you’d expect. Membership in the blue and yellow sets is not a binary proposition; there are degrees of membership.

Diversion II: Phase Space

Why do we care about sets all of a sudden? Because sets are one way to represent systems. More accurately, any given system has many states, and states can be grouped in various ways as sets.

First, let’s look at a very simple system: a switch. It has two possible states, on and off. The system can be described with a graph: a single axis bearing just two points, ON and OFF.

This is a phase space, a space of all possible states of the system. The phase space diagram here is simple. It has one axis — one dimension — because the system only has one controlling variable. It has two states — two coordinates in phase space — because it’s a discrete binary variable. The switch is on or off. That’s it.

Systems of interest are, unsurprisingly, more complex than that.

Suppose now we have a control panel with one dial. It controls a vent which emits cold air. Next to the dial is a temperature gauge. The dial and gauge are wired to a room which you cannot directly observe. Your job is to reach a certain temperature.

A phase space diagram here would have two axes: one for the dial and one for the temperature. You need both axes to fully describe the configuration of a system at any given point in time.

What does that look like? A bit like this (warning, unsexy diagram): an empty plane, with the dial setting along one axis and the temperature along the other.

Now suppose you twiddle the dial. You have changed the configuration of the system — you’ve moved through phase space to a new set of coordinates. We draw a line on the diagram to represent that movement.

After a while, the temperature falls, extending the line further through phase space.

Not the most stunning of diagrams, I grant you. But this is broadly how phase diagrams work. The line is implied to be a span of time; the points are particular configurations of the system.
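
For the programmers in the audience, here’s one way to render the dial-and-gauge system as a path through phase space. The cooling model is entirely invented; the point is only that each state is a coordinate pair, and time traces out a line:

```python
# Each system state is a (dial, temperature) coordinate in phase space.
# The cooling model below is invented purely for illustration.
def step(dial: float, temp: float, ambient: float = 30.0) -> float:
    """One time step: the vent cools the room towards a target set by the dial."""
    target = ambient - 2.0 * dial          # higher dial setting, colder target
    return temp + 0.2 * (target - temp)    # relax part-way towards the target

dial, temp = 0.0, 30.0
trajectory = [(dial, temp)]                # the starting coordinate

dial = 5.0                                 # twiddle the dial...
for _ in range(10):                        # ...then let time pass
    temp = step(dial, temp)
    trajectory.append((dial, temp))        # the line through phase space

for d, t in trajectory:
    print(f"dial={d:.1f}  temp={t:.1f}")
```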

So when is a system in a state of failure?

Dekker says that systems drift, and that from inside the system such drift isn’t visible until the failure occurs. But we still try to backtrack to discover “causes”, even when it might make no sense to.

Suppose a 2-variable system drifts to failure: picture the line wandering, step by step, into a region of the phase plane where failure lies.

Dekker posits that in the Newtonian-Cartesian paradigm, we aim to trace that line backwards in time to discover who and what failed. But this is insensible, says Dekker, because the causes can be diffused over the entire system rather than located in particular individuals or components.

The “Newtonian-Cartesian mindset”

Dekker decries the “Newtonian-Cartesian” mindset of trying to find discrete causes for failure. Instead, he says, each step can be sensible in itself, the causes too diffuse to tease out, or the information insufficient to work them out.

I don’t think that Dekker really refutes the N-C mindset at all. Just because a step was locally, but not globally optimal, doesn’t excuse it. If global reasoning was available, it should be exercised. Causes that are diffuse are still causes. Causes that can’t be detected due to lack of evidence or lack of instruments can still be considered causes (“hidden variables”, in physics parlance).

But Dekker wants to excuse a lot of such cases because, he posits, the Newtonian-Cartesian paradigm is itself broken.

I don’t think he proves his case. Worse still, he handwaves a lot of the rough edges of his argument away. Complex systems are hard to govern, he says. Why are they hard to govern? Because they’re complex. It’s circular logic.

Ultimately Dekker’s logic relies on the incomplete conception of logic I gave above. In Dekker’s conception, a system is or is not failed. The observable paradoxes of meaning that this generates are then resolved by slapping a “warning: complex system!” tag on it, plus a dose of postmodern voodoo.

What I propose as an alternative is that systems have degrees of failure. Even when, in the everyday sense, they have not “failed”, within the phase space there are fuzzy sets of states that represent all possible failures. And every state in the phase space has some degree of membership in each of those failure sets. Picture the drift diagram again, but with a fuzzy-edged “disaster” region shading gradually into the safe states.

(An alternative rendering would be to add the “disaster” membership degree as another axis, but my graphics skills extend only so far).

Going back to Alaska Airlines Flight 261: when the plane crashed, the aviation safety system obviously belonged to the “disaster” failure set to degree 1.0. But before the crash, its degree of membership in that set grew steadily as the system drifted towards it.
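
As a toy rendering of that idea (the maintenance-interval variable and all the numbers here are hypothetical inventions of mine; the real Flight 261 story is far more tangled):

```python
# Degrees of failure: as the system drifts, its membership in a fuzzy
# "disaster" set grows smoothly. All numbers are invented for illustration.
def disaster_membership(maintenance_interval_hours: float) -> float:
    """Degree (0.0 to 1.0) to which the system belongs to the fuzzy disaster set."""
    designed = 500.0   # hypothetical design-spec maintenance interval
    if maintenance_interval_hours <= designed:
        return 0.01    # never quite zero: flying is never entirely risk-free
    # membership climbs as the interval stretches past the design spec
    return min(1.0, 0.01 + (maintenance_interval_hours - designed) / 2000.0)

# the drift: each locally-rational extension nudges membership upwards
for interval in (500, 700, 1000, 1600, 2500):
    print(f"interval={interval:>4}h  disaster degree={disaster_membership(interval):.2f}")
```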

My formulation does not excuse actors and components in a complex system. They are, where any degree of global insight is possible, still on the hook.

Legalism and Realism

Dekker describes the hunt for a single person or component that caused a failure as taking a “legal view” of things. Which is funny, because lawyers have been grappling with complex systems for thousands of years. They’ve got some tricks up their sleeves.

One debate amongst lawyers concerns what role a judge should play. One classic doctrine is “Legalism”, most famously demanded by Sir Owen Dixon during his tenure as Chief Justice of the High Court of Australia:

Close adherence to legal reasoning is the only way to maintain the confidence of all parties in federal conflicts. It may be that the court is thought to be excessively legalistic. I should be sorry to think that it is anything else. There is no safer guide to judicial decisions in great conflict than strict and complete legalism.

Legalism meant that, in considering a case, judges should strive to ignore all considerations but the law. This is, in a strict sense, impossible. The world is too mixed up in the law, the law too mingled with the affairs of the world. Judges are mere humans; a sea of passions with a few stony outcrops of reason. Legalism is, like Newton’s laws stretched to their limits, strictly impossible.

That last argument leads us to Realism, which basically says: judges are biased. Judges make law, in practice. Get used to it.

But the funny thing is: when we zoom out, which doctrine better serves society at large? I would personally argue Legalism, imposing as it does much lower uncertainty costs and politicking costs on society at large. And that was Sir Owen’s point. The loosey-goosey “broadness” of Realism turns out, upon closer inspection, to be founded on a narrower view of society than Legalism. The Legalist embraces an important impossibility because it serves a higher good.

My analogy here is that Dekker is pooh-poohing the analogical Legalism of the Newtonian-Cartesian world view — that causes can be ultimately derived from computation and analysis — in favour of a kind of Realism. Systems are complex, he says. Get used to it.

But like the Realists, his analysis is too narrow. Even if he is right (and I think he is only half right, as I will go on to say below), his postmodern / complex system view nurses dangerous seeds. Embracing the concept that there is always a cause or a set of causes leads to better systems, even if it isn’t true.

What is a Complex System, anyhow?

Dekker never really makes this clear, perhaps because he lacks the fuzzy logic terminology to point out that it’s a matter of degree.

I suggest that a “complex” system is any system which successfully confounds human understanding. That’s a fuzzy statement already: which human? What counts as confounding? What counts as understanding? But if we accept the fuzzy logic worldview, it’s less of a problem. Systems belong to the “complex systems” set to different degrees.

But I suggest that there is no qualitative change. It’s just that some problems are too big for humans. Some problems are too big for any computational device, as computer science has discovered — some problems cannot be solved at all by a computing device; some can’t be solved before the heat death of the universe.

But suppose a sufficiently advanced hypercomputer were available (or, more quaintly, a god). What could it predict? How deep a system? What level of complexity? Newtonian — really Einsteinian — physics breaks down at the limit because of the uncertainty principle. But supposing it could be done, would this universe be predictable?

I think so. And that’s the most complex conceivable system there is — ie, the System of Everything. No qualitative shift has occurred. It’s a matter of (very, very, very large) quantitative differences.

So in fact “complex” systems are a human phenomenon, a label given to things that exceed 1) our ability to observe and 2) our ability to compute.

Epistemological Confusion

Dekker’s contest between the Newtonian-Cartesian and Complex-Postmodern worldviews is really akin to the debate between atheism and agnosticism.

Newtonian-Cartesianism says “this is reality, this is what is objective” — it’s a statement of belief. Postmodernism/Complexitism says “it’s unknowable, it’s constructed between subjects, it can’t realistically be done that way”. That’s a statement of epistemology, about what is knowable.

But these are talking past each other. Reality is, in a sense, both. There’s an objective reality, broadly a Newtonian-Cartesian reality at the humanly experienceable macroscale. And there’s our understanding of that reality. In a sense complexity just means “intractably difficult to compute”. Dekker has confused a statement of fact (“the world is not Newtonian-Cartesian at the macroscale”) with a statement of epistemology (“the world is not truly knowable at a complex scale”).

To me, a mechanistic universe does not preclude complexity, it predicts it. I can only imagine that a non-mechanistic universe would have no emergent phenomena and would resemble mere randomness. A non-mechanistic universe is entropic in an information-theoretic sense. No information arises from it, and therefore any claims of complexity are meaningless in a postmodern sense.

For example, the mechanistic nature of computers (Turing machines) belies the experienced complexity of modern computer systems. Alan Turing wrote a paper to discuss an important mathematical question, and as a side-effect invented one pillar of the modern world. At the basic level Turing’s hypothetical machine is extremely simple: a tape, a tape reader, a pen and some agreed symbols that can be read or written on the tape. Modern computers, at their most basic and fundamental level, still resemble a pastiche of the Turing machine.
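
To show how modest that basic level really is, here’s a minimal Turing-style machine in a few lines of Python. The example program, a unary incrementer, is my own toy rather than anything from Turing’s paper:

```python
# A tape, a head, some symbols, and a table of rules: that's the machine.
from collections import defaultdict

def run(tape: str, rules: dict, state: str = "scan", blank: str = "_") -> str:
    cells = defaultdict(lambda: blank, enumerate(tape))  # infinite-ish tape
    head = 0
    while state != "halt":
        state, write, move = rules[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# (state, symbol read) -> (new state, symbol to write, head movement)
increment = {
    ("scan", "1"): ("scan", "1", "R"),  # skip over the existing 1s
    ("scan", "_"): ("halt", "1", "R"),  # append one more 1, then halt
}

print(run("111", increment))  # -> 1111
```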

Yet from this very modest little well springs a fountain of complexity. Modern software systems are stupendously complex. Failure is their normal condition; trying to exhaustively test every combination of factors is so vast a task that it is laughed out of polite company. Yet we can test the common cases. Better yet, with some deft mathematical footwork we can simply eliminate whole swathes of phase space from consideration. This is the Newtonian-Cartesian paradigm at work, busily mending its own fences.
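
Property-based testing is one concrete example of that deft footwork. Rather than enumerating an astronomically large input space, you state a property and let a tool such as Python’s Hypothesis library sample the space hunting for counterexamples. A trivial sketch, with an invented function under test:

```python
from hypothesis import given
import hypothesis.strategies as st

def sort_descending(xs):  # the (deliberately trivial) system under test
    return sorted(xs, reverse=True)

@given(st.lists(st.integers()))
def test_output_is_ordered(xs):
    ys = sort_descending(xs)
    assert all(a >= b for a, b in zip(ys, ys[1:]))  # ordered
    assert sorted(ys) == sorted(xs)                 # nothing invented or lost

test_output_is_ordered()  # Hypothesis drives many generated cases through
```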

Should you read this book?

Yes, I think so. But critically. Dekker’s book makes fascinating reading and I greatly enjoyed it. I may have attacked it here, but that’s only because I think he fell short of elucidating and proving his case. A fine book can still be a fine book even if its contents or conclusions are, in one’s own opinion, wrong (cf. Plato’s Republic).


Responses to Review: Drift into Failure

  1. Paul Frijters says:

    Hi Jacques,

    interesting read. What you essentially argue is that the tools of what you call ‘Newtonian-Cartesian thinking’ (causality, proximate and hidden causes) are useful for handling complex situations even though they do not perfectly describe or predict them. As such you say this guy, who wants to abandon NC thinking but does not offer a practical alternative, is pretentious and useless.

    Similar debates come up in many areas of economics. The problem is invariably that to practitioners usefulness is the key motivation. Yet, to others, aesthetics is important for its own sake, and to them, having no alternative or failing to see the usefulness of something imperfect is a secondary issue: for them, the future will take care of itself and they will insist on something beautiful.

    I am one of those who thinks if someone can only criticize and not offer an alternative, he is not worth reading.

  2. fxh says:

    Good stuff J.

    This is the reason why the legal, Royal Commission-type approach to the Bushfires, or anything else, is usually next to useless in reality. But great for the legal profession, newspapers, politicians and knee-jerk blamers.

  3. john walker says:

    We are part of the world, and our understanding of “our understanding” of the world involves self-reference. “Not truly knowable” should simply read “not completely knowable”, no?

  4. FDB says:

    More Jacques, more often please!

    Very interesting review, although I’ve not read the book. I think our views on postmodernism, or at least any attempt to apply postmodern theory to aid understanding (or do anything useful), would be pretty close. And not, as has been the frequent accusation, because I haven’t read or tried to understand it. I’ve been down the rabbit hole, and I happen to think it’s a blind alley.

    The idea that a system is complex, so it’s useless trying to analyse it rationally, seems to come from the same poisoned well as the notion that one reading of a text is as good as another — death of the author and all that. That because we all bring ourselves, with our personal and cultural histories and proclivities, into our reading, it doesn’t really matter what the author intended to achieve.

    What bollocks. Just because no two readings are identical doesn’t mean there is no value in getting close to what the author meant. And in this case, just because a perfect understanding of the system is beyond us doesn’t mean we can afford to throw our hands up and say let’s not bother trying to burrow into that complexity as far as we can.

    I really hope Dekker doesn’t work at Boeing, if that really is what he’s suggesting.

  5. john walker says:

    Jacques
    Agree

    Hofstadter’s GEB chapter XX on tangled hierarchies is pretty interesting (and quite rational) on the nature of complex systems of representation of representation ..(GEB has been my ‘art’ theory book for decades).

    FDB: Hofstadter, in an essay on nonsense poetry, quite rightly described an exemplary piece of postmodern art theory writing as “syntax without any semantics”, i.e. nonsense verse.

  6. FDB says:

    syntax without any semantics

    Yep, it’s anything goes, when there’s nothing to lose, and nothing to prove.

  7. meika says:

    Agree. Glad you’ve explained fuzzy logic to Ken. (Can I have a job now?)(Or, at least now, can I have a better job.)

    And yes John Walker, it is Hofstadter’s GEB argument too.

    Recently I’ve been using ‘legalistic’ as a swear word. This comes from experience on a Lacrosse Association board nearly two decades ago. There I observed successive arguments/discussions between uni friends, one a lawyer and one a doctor, about insurance and the chance of likely injury on the lacrosse field playing non-contact “lady’s lacrosse”. The doctor was all statistics (not to be confused with fuzzy logic at all) and the lawyer all legalistic. One based on reality as probability, one based on a socially constructed legalism about what to do about X. One said it was a waste of money, the other said it was our duty. The “nudge” politics won the day.

    This allows me to mention my other favourite Ken.

    Ken MacLeod’s new novel Intrusion is a new kind of dystopian novel: a vision of a near future “benevolent dictatorship” run by Tony Blair-style technocrats who believe freedom isn’t the right to choose, it’s the right to have the government decide what you would choose, if only you knew what they knew.

    The premise is entirely legalistic.

  8. Nabakov says:

    Yo FXH, check your facebook messages.

  9. Tel says:

    “syntax without any semantics”

    Very much so. If I ask, “What’s the best size hammer to recharge a mobile phone?” it’s a perfectly well defined question, with no useful answer except to say you are probably asking the wrong question.

    “… when does the pile become a dune? And when does the dune become a desert?”

    Measure the mass of the sand. We already have the conceptual tools for the job, but “a pile of” sand is poorly defined and colloquial, while “a kilo of” sand is not a fuzzy concept, and if you remove a grain from the kilo you have one grain less than a kilo. Indeed, people have gone to extraordinary efforts to make sure this works as advertised (and quite possibly there is some deep reason in the universe why it does NOT work as advertised every time, but we haven’t stumbled into that yet).

    Regarding aircraft accidents, we do have a problem with that one, but not the conceptual problem that Dekker describes. The problem is lack of observability. The standard tools for handling risk are statistical tools, and they only work properly with largish sample sizes (check the various statistical confidence tests if you want a better understanding). Thus, for example, motor vehicle accidents are also highly complex systems, depending on large numbers of variables, and yet insurance companies do a very good job of accurately putting a price on that risk. Similarly, a Global Financial Crash is also a complex system, depending on large numbers of variables, but insurance companies have absolutely no idea how to put a price on such risk.

    The difference is that we have huge piles of data taken from many motor vehicle accidents, but relatively few financial meltdowns.

  10. Tel says:

    “I can only imagine that a non-mechanistic universe would have no emergent phenomena and would resemble mere randomness. A non-mechanistic universe is entropic in an information-theoretic sense. No information arises from it, and therefore any claims of complexity are meaningless in a postmodern sense.”

    You aren’t the only one who has a problem with the implication of this. You are going to have to work pretty hard to make people abandon quantum mechanics though.

    http://www.theonion.com/articles/christian-right-lobbies-to-overturn-second-law-of,281/

    Have you considered that perhaps God does have a plan?

    Maybe it’s a fixed-term repayment plan…

  11. Hi Jacques,

    Thanks for the long and entertaining review. I do think that Dekker’s failure to define complex system adequately seems a key shortcoming of the book.

    To me, the key is to understand that complex systems are fundamentally indeterministic. A while ago, Joe Firestone explained why to me using our current understanding of quantum mechanics: Current theory suggests that complex systems are deterministic on a multiverse level, but indeterministic from the point of view of any single universe. This is probably the basis for Dekker asserting that N-C entailment does not apply.

    It is common to conflate this kind of “complex system” with “extremely complicated deterministic systems”, and the confusion leads to many misconceptions and bad lines of argument.

    I’ve focused primarily on a type of complex system in my study and application of Knowledge Management – complex adaptive systems (CAS). Any system involving humans is fundamentally a CAS, although the inverse proposition is not necessarily true.

    To answer Paul’s challenge in this context, the reason why N-C cause and effect is dangerous when analysing complex systems is that it is the origin of the Command and Control management strategy. Believing N-C applies to complex systems means we believe we can achieve a particular outcome “if we only had better tools to control them”. But we can’t.

    Instead, successfully dealing with complex systems means adopting a whole new toolkit that relies upon managing concepts like propensities and coherent outcomes. One of the major ideas within a CAS is the use of iterative decision execution cycles – PDCA, OODA, and the like. Another key concept is the idea of safe-fail experimentation to explore possible system outcomes in a recoverable manner.

  12. fxh says:

    Nabs

    Facebook what’s that?

  13. “Just because a step was locally, but not globally optimal, doesn’t excuse it. If global reasoning was available, it should exercised”

    Should BE exercised?

    V. enjoyable post btw. In one of Ayn Rand’s “novels” (I forget which one, I think Atlas Snickered) there’s a train accident with multiple causes. As befits her “philosophy”, Ms Rand blames t’ guvnmint.

  14. emess says:

    Hi, interesting read.

    In complex adaptive systems theory, there is still quite a bit of argumentation about how a complex adaptive system should be defined. (People can agree that systems like ant and bee hives and the stock market are such systems, but, like pornography, we know it when we see it; defining it is difficult.) What made you decide not to go for at least one of the several alternative existing definitions out there?

    I also have some doubts about the claim that complex systems may not be understandable. For example, an ant’s nest would be complex if organised by a human (think of all the public servants needed to run the government… hehehe, or the possibilities for logistics firms like DHL), but the system runs itself with relatively few commands and extremely primitive ‘almost brains’ in its operatives. That is, with very simple rules one can generate self-sustaining communities in the millions. Such systems are normally internally cooperative, but externally competitive.

    Now, with an ant society with little internal brain, that internal cooperation is almost total, and its aggression/competition similarly – so probably not fuzzy. However, with human societal systems (the stock market, for example) there is a great deal of fuzziness about the degree of cooperation and competition internal to a particular part of society. My suggestion is that not only do you have to think about the fuzziness of a system, but that the degree of that fuzziness depends on the degree of cooperation/competition within the system. Looked at from this point of view, questions of what degree of competition and/or cooperation exist also determine the nature of the system and whether or not it fails. I see competition and cooperation as barriers/linkages between the local and the global issues you have raised. Competition is something I see as connected to the more local, and cooperation to the global. The question is, where is the optimal setting? And what are the means whereby we can set those connections to achieve that optimum?

    Next thing is that sometimes the system itself doesn’t fail, but the environment in which it is operating changes so that the perfectly operating system becomes irrelevant. For example the system of posting horses through the UK in the 19th century worked well until the advent of the motor car.

    Now, I do want to sound pedantic. To say “Sir Dixon” is wrong. The form is Sir {First name} {Surname} or Sir {First name}… never ever ever Sir {Surname}. Normally I have no problem with Americanisms, but since they don’t have knighthoods, this Americanism simply ain’t right.

  15. john walker says:

    from new scientist
    “In his book Not Exactly, Kees van Deemter argues that the very foundations of science don’t come in black and white.

    Forgive the oxymoron, but how do you define vagueness?

    A vague concept allows borderline cases. The potential confusion is that people think vagueness is when they don’t quite get what someone means. For people in my area of logic, it’s actually a much narrower phenomenon, such as the word “grey”. Some birds are clearly grey, some are clearly not, while others are somewhere in between. The fact that such birds exist makes “grey” a vague concept. The vagueness does not arise from insufficient information: some concepts are fundamentally vague.”

    The introduction of representations of zero and infinity is quite recent in the West (1400s–1600s).

  16. The problem is invariably that to practitioners usefulness is the key motivation. Yet, to others, aesthetics is important for its own sake and to them, having no alternative or failing to see the usefulness of something imperfect is a secondary issue

    Yes, absolutely. That’s why I mentioned judges embracing the “impossible” legalism in order to serve a higher good. In software engineering we have similar arguments about the ideal and the actual state of the craft. A famous essay says that “Worse is Better”.

    Mind you, path dependency is a confounding factor in discussing such matters in software. Once a technology is sufficiently entrenched, all its successors must pay obeisance or be cast into the outer darkness, no matter what their actual merits might be.

  17. And in this case, just because a perfect understanding of the system is beyond us doesn’t mean we can afford to throw our hands up and say let’s not bother trying to burrow into that complexity as far as we can.

    Agreed. Intellectual immodesty has done more for humanity than knowing our place ever did.

  18. Measure the mass of the sand. We already have the conceptual tools for the job, but “a pile of” sand is poorly defined and colloquial, while “a kilo of” sand is not a fuzzy concept, and if you remove a grain from the kilo you have one grain less than a kilo.

    The point is that a lot of things in the real world defy discrete definition. It might be useful to know what is and isn’t a pile of sand, and saying that “a pile of sand weighs 4.4kg or more” is basically picking an arbitrary line.

    I found fuzzy logic immediately interesting because of my background in databases. It’s not unusual to define hard boundaries for classifying data, even where it might not be sensible to do so. But the computer demands a precise number or rule, so we grant one.

    Doing so actually destroys information to create data. The nice thing about fuzzy logic is that it allows one to retain the content of “linguistic variables” without also giving up data and symbol manipulation.

    A good example of the destruction of information is double-entry bookkeeping. A lot of times what is entered is a single-point estimate of something. The concept of uncertainty does not follow the datum into the system, meaning that the aggregated figures are invariably wrong. In theory they’re a close total approximation of a complex system, but I personally think it’d be nice to have some confidence bars on budgets and balance sheets.
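
    A toy contrast, with an invented cutoff and membership curve, of the information a hard boundary throws away:

```python
def tall_crisp(height_cm: float) -> bool:
    return height_cm >= 180       # 179.9 cm and 150 cm both come out False

def tall_fuzzy(height_cm: float) -> float:
    # invented ramp: membership climbs from 0.0 at 160 cm to 1.0 at 200 cm
    return min(1.0, max(0.0, (height_cm - 160) / 40))

for h in (150, 179.9, 185):
    print(h, tall_crisp(h), round(tall_fuzzy(h), 2))
```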

  19. To me, the key is to understand that complex systems are fundamentally indeterministic.

    Note my careful qualification of “macroscale” :D

    That said, Newtonian physics only really breaks down at the limits. In the history of humanity I very much doubt that a quantum uncertainty has had an impact distinguishable from a chaotic deterministic system.

    To answer Paul’s challenge in this context, the reason why N-C cause and effect is dangerous when analysing complex systems is that it is the origin of the Command and Control management strategy. Believing N-C applies to complex systems means we believe we can achieve a particular outcome “if we only had better tools to control them”. But we can’t.

    My larger point is that it’s better to try and fail to tame the complex system than to never try at all.

  20. Now, I do want to sound pedantic. To say “Sir Dixon” is wrong. The form is Sir {First name} {Surname} or Sir {First name}… never ever ever Sir {Surname}

    Thanks, I’ll fix it up.

  21. To me, the key is to understand that complex systems are fundamentally indeterministic.

    Note my careful qualification of “macroscale” :D

    That said, Newtonian physics only really breaks down at the limits. In the history of humanity I very much doubt that a quantum uncertainty has had an impact distinguishable from a chaotic deterministic system.

    Well, I was talking about complex adaptive systems rather than chaotic systems. And yes, here quantum effects matter at the macroscale, since they are one of the enablers, if not THE key enabler of intelligent life (assuming you believe in free will).

    You simply can’t reduce a person to a set of Newtonian equations…

  22. Tel says:

    The point is that a lot of things in the real world defy discrete definition.

    Yeah, like the entire set of real numbers defies discrete definition.

    Let’s suppose you have your sand and you decide to use fuzzy logic. The archetypal “pinch of sand” might be 2 grams. The archetypal “pile of sand” might be 5 kg, while a “heap” is 100kg. So now you need a token to indicate pinch/pile/heap and you need a real number anyhow to indicate fuzzy membership of the set. Probably you need two records because you end up sitting between two archetypes so you have two tokens plus two real numbers.

    The other choice is just to record the mass of the sand, which is one real number.

    But the computer demands a precise number or rule, so we grant one.

    You can’t actually put a real number into a computer, but you do have a selection of well-tested approximations to choose from (e.g. IEEE 64-bit floats), probably able to represent the number better than you can measure it. Fuzzy logic requires exactly the same compromise.

    The “linguistic variable” factor is basically a human thing. It’s a convenient way to offer people something that feels intuitive and seems to fit their mental model. If you don’t have the necessary decades of research to provide a real physical formula, then having a piecewise approximation is better than nothing; at least then you have something you can explain to people. Maybe for measuring something like personality types, which don’t have any known physical units, it might make sense in situations where no one knows the proper way to do it.

    Someone once showed me a self-learning fuzzy logic “anything controller”, where you just plug it in and it goes through a brute force trial and error approach, making a note of what works vs what doesn’t (it has absolutely no idea what it is connected to nor what the numbers mean). Eventually it builds up a large enough interpolation table to take control of the system. Impressive though that sounds, I’d be reluctant to use it for managing the cooling system on a nuclear reactor, or a financial system for that matter.
