A short column in the Age published today
Reduce the bugbears with some beta-tested policies
THERE’S a saying made famous by Eric S. Raymond, the author of the landmark book on Web 2.0, The Cathedral and the Bazaar. In computer geek speak, it’s this: “given a large enough beta-tester and co-developer base, almost every problem will be characterised quickly and the fix will be obvious to someone”.
Raymond went on to put it more memorably as Linus’ Law: “given enough eyeballs, all bugs are shallow”.
As it is putting what it hopes are the finishing touches to its next stimulus package, the Government would do well to remember Linus’ Law.
Governments make extensive use of public consultative methods in arriving at policies precisely to “bug-fix” them politically and technically and otherwise optimise policy outcomes. But they also think of them as cumbersome. Traditionally, they have been.
A government inquiry takes three months at the very least and, at the other end of the spectrum, the Henry review is taking more than 18 months.
But this is the 21st century, the age of Web 2.0, where it’s never been easier to tap into the wisdom of the crowd. What we need is a bit of “turbo crowd-sourcing”. The Government could start the whole thing by taking the package it is finalising and pencilling the word “draft” into its title. It would then release the package, explaining the ideas behind it, and leave it on the table for a brief period – say two weeks – with some website/blog dedicated to receiving public responses and thrashing through the issues. The policy would be optimised and finalised over that period, in response to feedback from all comers.
Of course the political process does this to some extent. But it typically does so in an atmosphere of maximum hysteria. Had the Government released the package of bank guarantees it introduced late last year as a draft and opened it up for a brief burst of feedback, some of the bugs might well have been fixed before it committed itself. Note that the bugs were not found and fixed by the political process, because those who ultimately criticised the package only did so after endorsing it in Parliament.
And, of course, who’s to say the improvements would be limited to bug-fixing. Our senior bureaucrats are some of the best in the world. But there are plenty of good ideas and constructive criticism the Government could profitably pick up from a sufficiently open process.
Of course the simple political explanation for why this doesn’t happen now is that governments like projecting an image of mastery. But if the Government straightforwardly conceded that we are all having to think our way through this crisis on our feet – if it said simply “we’re leading this process, but we need your help” – I can’t see it going over too badly, can you?
As economist Henry Ergas memorably put it in an emailed response to this proposal yesterday, “I especially agree with your point on putting out proposals and having a genuine discussion: this would, in my view, be an enormous step forward. Fact is, we are all in a play with no script and no dress rehearsals, and it is stupid to pretend otherwise.”
This is an interesting idea but I think we need to bug-fix it.
To be pedantic, the book/essay was never really about Web 2.0, and the whole catchcry of “Web 2.0” came years later; even the name “crowdsourcing” has only recently become a buzzword. I’ll agree that on a conceptual level these ideas are all related. The idea of a marketplace for ideas is nothing new, and to some extent Central Planning is analogous to the Cathedral (which also represents the mega-corp shrinkwrapped software industry) while the Free Market is analogous to the Bazaar (representing the Open Source software development process).
FWIW the philosophical section of Eric’s essay is here:
http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/ar01s11.html
To be fair to the Central Planners out there, the Open Source development of the Linux kernel has proceeded by rather large and wasteful rewrites of code. The USB drivers were rewritten three times (mostly for style reasons), the IEEE 802.11 subsystem exists in two complete versions which are incompatible with one another, and the entire firewall system was rebuilt from the ground up several times over (getting bigger and more complex each time). There are probably many other smaller cases of hard work being thrown away. Arguably, this process has resulted in a better end product.
This idea of “build it then burn it” is not unusual for any free market system. Some would argue that false trails are inevitable in any situation, others would suggest that good planning can avoid the wastage. I don’t really have an answer for that, my philosophy is that the freedom for individuals to be able to make decisions has intrinsic value in its own right and it’s a small price to pay when we know that some percentage of those decisions will be bad ones.
Yes, that’s mostly a problem with our media, who will slap any government around for a “backflip” but hand a free pass to any opposition taking self-contradictory positions. In many ways, the opposition has an incentive to deliberately encourage the worst possible outcome in order to foment unrest.
Thanks Tel,
I did think about the claim I made before making it – and I’ve read The Cathedral and the Bazaar. I think my claim remains valid, if terminologically anachronistic. Linux is one of the core Web 2.0 phenomena in my book (and perhaps that’s not Gospel according to Tim O’Reilly – I don’t know). I know it got going before the www had got anywhere much – if indeed it was going at the time at all – was it 1991?
Nicholas,
I have to agree with Tel here. You’re conflating “Web 2.0” with the open source movement generally.
Personally I have a dislike for the Web 2.0 moniker for two reasons:
* It’s ill-defined and these days can be whacked on to any piece of software running in a browser as long as it does some kind of AJAX magic or allows user tagging
* It became the inspiration for any number of other dodgy “2.0” claims — Enterprise 2.0 and Knowledge Management 2.0 in particular
But regardless of the vagaries of Web 2.0, it has two core features which cannot be dropped:
* the software platform must be web-based (ie it must have a web address)
* the content of the website must be modifiable by the users who visit
Contributed content may be “owned” by the individual authors/contributors and not be changeable by others, or the website may allow anyone to modify everything.
Calling Linux a product of Web 2.0 is both misleading and wrong, because it wasn’t built using a web-based platform. The open and collaborative approach it used exemplifies open source development; it has nothing to do with Web 2.0.
Many Web 2.0 sites exploit open source principles because it’s an efficient way to generate content, but the inverse is not true.
By the way “all bugs may be shallow” in an open source context, but that doesn’t mean that the set of features that people want will be obvious. Just look at some of the flamewars which have sprung up around whether to include or exclude a particular feature in a software package.
And unlike software, it’s not possible to “fork” legislation — that is, have multiple versions of bills that can be tailored to suit individual preferences.
So I suspect that crowd sourcing would end up more like the Wired experiment in collaboratively authoring a news article. The author/coordinator of that experiment, Ryan Singel, said:
And that was for a single non-consequential article in a magazine. Just imagine how much more vitriolic the “crowd-sourcing” would get if millions of dollars were at stake!
Public discussion would be an especially good idea for political hot potatoes. For example, it may be the only way Australia will ever get a republic. They could set up a prestigious, moderated blog and hold a competition with prizes for significant contributions.
150 years of crowd-sourcing is the explanation for Swiss policy success. They don’t need the internet. Federally, every law is subject to a referendum if 50K signatures are gathered. That is not very many, so the art of government is to avoid a referendum, and the executive bends over backward to consult with absolutely everyone they can think of.
Well….
The Libertarian ideal is to have the smallest possible core legislation at the national level then leave as much as possible to regional government working in competitive federalism. People naturally gravitate to the region that suits them best and even though a great variety of lifestyles may exist, still everyone is happy (ummm, everyone other than the people who think that all other people should think and act exactly as they do).
This could (for example) work in the banking industry if we were willing to suffer multiple parallel currencies each with a different regulation regime. People who prefer a low regulation currency can move their accounts over to that system, others who prefer high regulation could move their accounts to that system. The overhead of running multiple currencies is that we must maintain a conversion calculator, and with computer technology this overhead is not such a burden. The benefit of running multiple currencies is that a blunder or shock to the system gets decoupled by the exchange rates and each individual can choose their distribution of risk between the currencies that they believe are most trustworthy.
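Tel’s “conversion calculator” overhead is easy to see in a trivial sketch. Everything below is hypothetical – the currency names, the regulation regimes they stand for, and the exchange rates are invented purely to illustrate how rates decouple parallel systems:

```python
# Two hypothetical parallel currencies, each quoted against a common
# reference unit. A shock to one currency shows up as a change in its
# rate; balances held in the other are decoupled from it by the rate.

RATES = {           # reference units per one unit of each currency
    "low-reg": 0.92,    # invented rate for a low-regulation currency
    "high-reg": 1.05,   # invented rate for a high-regulation currency
}

def convert(amount: float, from_cur: str, to_cur: str) -> float:
    """Convert between parallel currencies via the reference unit."""
    return amount * RATES[from_cur] / RATES[to_cur]

# Moving an account between regimes is just one multiplication and one
# division -- the computational "overhead" is negligible, as claimed.
balance_in_high_reg = convert(100.0, "low-reg", "high-reg")
```

The point of the sketch is only that the bookkeeping cost is a couple of arithmetic operations per transfer; the real costs of parallel currencies would be institutional, not computational.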
So yes it is POSSIBLE to fork legislation, with a suitable approach to doing so. We choose not to allow such a system for whatever political reasons are best known to those involved (mostly because a central government will do anything to prevent the reduction of their own power and once a central government exists they tend to manage to find a bigger lever than anyone else and lean on it until they get their way).
Heh … good point, Tel. I did mean within our current framework of rigid national and state boundaries.
Of course, your argument applies more or less to our current system of States and Territories as well. The difference is that we have an oligopoly rather than a free market system.
By the way, it sounds like you would get a lot out of John Robb’s Global Guerrillas blog. He talks a lot about the inevitable decline and fall of the nation-state as an outmoded monopolistic model, eg:
http://globalguerrillas.typepad.com/globalguerrillas/2009/01/protection-rackets.html
I don’t by any stretch agree with everything he writes, but it’s thought-provoking reading.
I can see solid similarities when viewed from the point of view of decentralisation of decision making power. The “merit” in a meritocracy is itself subject to being a matter of opinion which in turn requires democratic evaluation, so ultimately everything comes down to “might is right” eventually (sad to face that).
I see differences from this angle: wherever you have a marketplace for ideas, you also need a way for people to show real commitment to the ideas that they support (which means putting themselves at risk in some way and devoting some sort of resource to their cause). In a stock market they show commitment by putting their money at risk; in an open source environment they must prove the first generation of a coding concept by actually writing the code (if the concept takes a while to catch on, they will probably need to maintain several generations of that code before interest escalates). In a perfectly egalitarian environment, the only resource that people can put at risk is that they bother to vote at all (which is enough for crowdsource link-farms like Digg and Reddit to operate). Since voting is generally secret, people don’t even put their reputation at risk. Low risk implies minimal consequence, thus minimal decision-making effort.
In the acting world, you are as good as your most recent movie. I suspect that the open source world is pretty similar. Let us suppose that Linus went completely insane tomorrow (touch wood that it doesn’t happen, this is purely a thought experiment) so how long would it take before someone else ended up doing his job? My guess is not very long at all (maybe a month). There is no way to save up “merit” in a bank account or anything similar. This does make a system that is egalitarian from a certain point of view (admittedly, not to everyone’s taste).
What a pointless discussion on Web 2.0.
As Woody Allen’s Mum says to his Dad in (was it?) Annie Hall, “Have it your own way, the Atlantic Ocean is a better ocean than the Pacific Ocean”.
I used the expression to convey a quite specific idea – the use of the internet to host collaboration including collaborative discussion. It isn’t an article about software terminology.
With respect Nicholas, if you think our discussion is only related to software terminology, then you haven’t been reading our posts very carefully.
You were the one who suggested that we could just “crowd-source” our way to better policy, and linked it to the success of the open source movement and particularly Linux. I don’t think Linus’ law applies to the situation you are talking about.
Open source works because a community can gather to build a common product that delivers mutual benefits to contributors. It also provides benefits to people who didn’t contribute without loss of utility to the original creators. Importantly, open source needs to withstand conflict through compromise by providing multiple pathways and options as part of the solution.
Authoring something like Wikipedia works (barely) because the opportunity cost of vandalising a page is far greater than the cost of reversing the damage. And again, the benefit of all the work is available to everyone, even if you didn’t contribute.
But if you “turbo-charge” public consultation in line with your suggestion, you are changing nothing about the process, just trying to speed it up. You’re not changing the likely set of stakeholders, nor are you changing the vested interests of those involved. And presumably the changes that get adopted from the public consultation process will still be those approved by a Parliamentary committee.
Last I checked, people can’t read faster whether they do it online or on paper. So allowing online submissions won’t make a big difference to the processing time needed.
To make a significant difference to the review process, you would have to allow inline comments by people against each section of the proposed legislation, probably with breakout boxes to allow discussion of intricate interactions between sections. But by the time you get this kind of complex collaborative process up and running, the review process likely won’t be faster. If anything, it will be slower — albeit more comprehensive and truly inclusive of the public.
What’s more, true long-term benefits would only be realised by moving to a proper “open source” model. That is, hosting pieces of legislation on a central server and encouraging governments and citizens right across the world to continuously contribute to them. There could be sensible framework documents set up for widely-adopted legislation (such as copyright and defamation laws) where local amendment packs would be developed to avoid conflicts in existing legal frameworks and local moral codes. Now that I would like to see in Parliament!
Stephen and others,
The essence of the idea was simply that, with things as complex and fast-moving and difficult as they are with the GFC, taking it easy with the Mr Fixit stuff and instead putting out a draft statement to be finalised in a couple of weeks is a more sensible way to go. The one comment on point seems to agree.
You’re right – I’ve not read all the comments carefully as they go off on a tangent which is pretty deadly dull, since it’s based on such gripping questions as ‘is Linux really Web 2.0?’. Well, the only way in which I would be interested in such a discussion is if I knew what the discussion was trying to achieve. If it’s trying to help me make my point, then Web 2.0 seems OK, because it’s clearly being used as a tag line for the ‘collaborative web’. That’s the point of my use of the term. If you think it was ill-advised, fair enough. Seemed OK to me until an army of programmers came along ;)
The central idea I was discussing was the idea of opening something up to debate before finishing it off. Now that’s not quite what Linux does, or Google or Wikipedia or anything else. So the use of an analogy or simile is only of some use if it’s interpreted mutatis mutandis, in sympathy with the general point of the article in which it’s used. It might well be interesting to cogitate about how we should define Web 2.0, but for me it’s a tag for ‘collaborative web’, and Linux is collaborative web.
I wasn’t suggesting or trying to suggest that the existence of ‘Web 2.0’ would somehow transform what you are right in pointing out is just more consultation – I pretty much said that in the article. But it does make a difference, and a big difference: we don’t have to hold a meeting to which everyone can go. We don’t have to print up and distribute paper, and we don’t have to decide on invitations. We can let those who wish post to a blog in real time and put some resources into finding out who’s got something important to say on those blogs.
I think it’s worth putting out a provisional plan and then finalising it in a couple of weeks. You – Stephen – think one can only release a draft report and then ‘take submissions’, taking months. I’d like to see what I’m suggesting tried. Don’t you think it might have helped with the guarantees? I don’t think one can know until one gives it a go – as Clay Shirky says, that’s what’s so good about Web 2.0: it lowers the cost of trying and failing. Then again I guess governments don’t like failure, because they get ridiculed for trying on A Current Affair. So perhaps you’re right.
I’m glad we’re all coming round to the idea that all the king’s horses and all the king’s men haven’t got a bloody clue what they’re doing at present – just that they’ve all got to be seen to ‘do something’, and therein lies the further catastrophe.
Nicholas,
I agree with most of what you’re saying, except that “Linux is collaborative web”. It’s not.
My main concern about your proposal is that bug-fixing in software can be empirically validated – has this fixed the observed problem? yes/no. So anyone can be allowed to contribute regardless of their level of expertise, because ultimately the test for acceptance or rejection is: does the solution work?
But there are no such hard and fast rules for policy, where it often comes down to someone’s best guess about what impacts a change will have. In other words, for good outcomes we need experts and co-ordinators.
So to sidestep further debate about technicalities, here’s a concrete proposal on how your suggestion might work:
(1) Ask people to register their interest in advance for consultation on government policy matters as an expert in given fields (people can also register on the spot, but this makes it easier to send invitations to relevant and interested parties)
(2) Set up a policy committee to review the legislation in question, preferably including both elected representatives and public servants
(3) Assign specific areas of responsibility — Mr A reviews social impact, Ms B reviews financial implications, Ms C checks potential for legal liability etc.
(4) Each committee member is assigned a collaborative space. They then invite people with relevant expertise to join the space, where issues are raised and debated and amendments proposed.
(5) After a couple of weeks, the committee reconvenes and debates the merits of the proposed amendments. These are then incorporated into the committee recommendations.
Opening the whole legislation for comments on any possible aspect by anyone is a recipe for mess and disaster. Getting the Government’s reps to coordinate feedback on specific aspects of the legislation, on the other hand, might just work.
This also bypasses the ACA problem, since there won’t be the same public visibility of the process.
And given enough snouts, all troughs are shallow:
http://www.nytimes.com/2009/01/30/opinion/30brooks.html?_r=1&partner=permalink&exprod=permalink
Comment of the week: “Given enough snouts all troughs are shallow”. I like!
http://stimuluswatch.org/
Note, in the UK, a government report has just been released in ‘beta’ pending final release two weeks later – yes two weeks.