Guest Post by Felix Barbalet: The productivity of the citizen developer

Felix Barbalet is a data scientist and economist working in Canberra who has recently launched http://www.APSindex.com and https://www.APSjobs.info. He is a good fellow, and on discussing his new websites with him, I suggested that he give us a post about the remarkable productivity that's now possible with all the resources available for nix.

Where is evidence-based IT?

I would like to see a more formal treatment of technical debt and the cost of complexity in designing and building large (IT) systems.

There are countless examples of large IT projects failing or running well over budget. Sound policy development usually makes reference to an evidence base (and as economists, we place a large emphasis on the quality of data behind assumptions) – but there is little in the way of evidence-backed IT forecasting.

Typical IT project costings are based on estimates of the work required and the time/resources to complete it. The fundamental problem with this approach is that it leaves the project exposed to the planning fallacy (a term coined by Kahneman and Tversky in 1979): humans are inherently optimistic – we systematically underestimate the amount of work required to complete a task.
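To see why this bites hardest at scale, here is a minimal simulation sketch – all parameters are illustrative assumptions, not drawn from real project data. Because the per-task errors are skewed in one direction, they compound across a large project rather than cancelling out.

```python
import random

random.seed(1)

def actual_duration(estimate_days):
    """Toy model of the planning fallacy: actual effort is the estimate
    times an overrun factor skewed above 1 (we rarely finish early, and
    occasionally blow out badly). Parameters are hypothetical."""
    return estimate_days * (1 + max(0.0, random.gauss(0.4, 0.5)))

estimates = [random.uniform(2, 15) for _ in range(200)]  # 200 tasks
planned = sum(estimates)
actual = sum(actual_duration(e) for e in estimates)

print(f"planned: {planned:.0f} days, actual: {actual:.0f} days "
      f"({100 * (actual / planned - 1):.0f}% overrun)")
```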

That in itself is not surprising (take a look at this list of cognitive biases, FYI) – what is surprising is that there is not a more formal treatment of this problem in large IT projects.

Frameworks that embrace iterative development (for example, Scrum) massively reduce the risk of imperfect planning because they remove the assumption of perfect foresight while encouraging a strong feedback loop (aka process improvement). These concepts are by no means new in more mature fields like manufacturing, but in IT development we still have a long way to go.

Cognitive surplus is changing our world

Clay Shirky's 2010 TED talk introduces the notion of cognitive surplus: the spare brain cycles (which have always existed) that are now being amplified by digital technologies (which have only recently become ubiquitous).

Open-source software (free as in freedom) is, in large part, both the result and the driver of this explosion in cognitive surplus. It has lowered the barriers to an incredible range of technologies and capabilities. The network effect derived from allowing anyone to consume and contribute to an open-source project must be astronomical.

One great specimen of these effects being captured is Kaggle, the platform for data-science competitions worldwide. Their business is built on the huge number of professionals and amateurs alike looking for interesting problems to solve in their leisure time.

Another emergent feature of this cognitive surplus is the citizen developer – people who have embraced various computer science topics (for example, programming), but never formally studied computer science.

The citizen developer

The term Citizen Developer was popularised in the 1980s by James Martin in his book 'Application Development Without Programmers' – it's not something that is particularly new. What is new is that in the last decade the sophistication of the technology available to these developers has increased exponentially – resulting in a matched increase in productivity.

Today: we can access vast troves of data and then process it at scale using just a credit card. We can use a rapid application framework to maximise productivity and minimise development time while building an application. We can instantly deploy that application to the same infrastructure Google uses to power their billion-dollar business. We can do all this from the comfort of our local coffee shop.
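To make that concrete, here is roughly what the whole of such an application can look like – a minimal sketch only, assuming Flask and a Google App Engine-style platform; the endpoint and message are hypothetical.

```python
# main.py - the entire application. Deployed with a one-line command
# (e.g. `gcloud app deploy` on App Engine), it runs on the same
# infrastructure that serves Google's own products.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # A trivial endpoint; a real citizen-developer app would query an
    # open dataset here instead of returning a canned message.
    return jsonify({"status": "ok", "message": "hello from the coffee shop"})

if __name__ == "__main__":
    app.run(port=8080)  # local testing; the platform runs it in production
```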

The only capital investment required is time. The ecosystems that support citizen developers are inherently open and defined by a fierce competition of ideas, but are vast enough that the long tail ensures a place for everyone: a world where creative disruption is the new creative destruction and emergent behaviour is king.

Rich Hickey's fantastic (but rather developer-focused) presentation outlines some of the reasoning around the concept of convention over configuration (or simplicity over easiness), which explains why using standard open frameworks as building blocks avoids the problems associated with complexity in large systems.

This does not seem to be a concept that many large enterprises or governments are comfortable with – the failure to apply evidence-based project management to IT projects is only one symptom.

From the perspective of a citizen developer, questions of technical debt and complexity are front and centre. How long is it going to take me to solve a problem using language A versus language B? What is the trade-off between my initial investment to learn a framework and its discounted future value? How much of my available time in the future am I going to have to spend maintaining a system if I build it on X vs Y?
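Those trade-offs can be made crude but explicit. A back-of-the-envelope sketch of the discounted-value question – every number here is a hypothetical assumption:

```python
def npv(cashflows, rate):
    """Net present value of per-period values in hours: hours spent are
    negative, hours saved are positive."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(cashflows))

RATE = 0.05  # hypothetical per-year discount rate on my spare time

# Framework A: 40 hours up front to learn, saves ~30 hours/year after.
framework_a = npv([-40] + [30] * 5, RATE)
# Framework B: 10 hours to learn, saves ~15 hours/year but costs
# ~5 hours/year in maintenance.
framework_b = npv([-10] + [15 - 5] * 5, RATE)

print(f"A: {framework_a:+.0f} hours, B: {framework_b:+.0f} hours")
```

On these made-up numbers the heavier framework wins over a five-year horizon; shorten the horizon or raise the maintenance burden and the answer flips.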

Of course there is no single correct answer – but competition is the key. The open-source ecosystem is so productive because it encourages a multitude of competing technologies and platforms, each building on what came before, each open to innovation and to disruption.

For large enterprises and governments, this is the secret: think and act like a citizen developer – because for the citizen developer failure is acceptable and inevitable, but failing to learn from it is not.

This is a cross-post from the pivotal analytics research blog.

22 Comments
desipis
10 years ago

The fundamental problem with this approach is that it leaves the project open to significant planning fallacy

That's not how I see the fundamental problem. I see it as the fact that dealing with information systems involves inherently more uncertainty than other engineering-type projects. Estimating IT-related projects has more in common with scientific research or mineral prospecting. How does one estimate how long it will take to find the next big gold deposit, or how much money it'll cost to cure cancer?

The biggest mistakes I see come from assumptions that information systems are just another widget you churn out from the factory. Thinking that if you just have enough data about previous projects you can accurately predict what the costs of a new project will be is part of the problem. Even if you could collect enough quality data from previous projects, by the time you have enough information about the new project to produce an accurate estimate you’ll be 90% of the way through the work anyway. That’s where iterative development processes have an advantage in that they don’t look or plan too far into the future, and so don’t suffer the same extent of rework that occurs when predictions turn out wrong.

The bigger problem I see is the assumption that a turn-key tender process is the best way to approach large IT projects. The better way to approach these issues involves two parts. First, to see information services as an ongoing and evolving service, and not as a series of distinguishable projects. Second, to realise that the skills and system-specific experience of the people doing the grunt work are an asset worth far more than the physical infrastructure or IP. In short, IT professionals need to be tightly integrated with existing organisational structures, not segmented out into their own department or (worse) outsourced to an external entity.

Marks
10 years ago
Reply to  desipis

The point about it being a bias is that the errors are generally all in one direction. If it were merely that IT estimation was harder than, say, estimating the cost of driving a tunnel where one has no good idea of the ground through which one burrows, then certainly the spread of errors would be greater, but those errors would be spread about the mean of the final cost and time performance outcomes.
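The distinction is easy to check on simulated data – a quick sketch with purely illustrative parameters. Hard-but-unbiased estimation scatters its errors around zero; a planning-fallacy bias shifts the whole distribution to one side.

```python
import random
import statistics

random.seed(42)
N = 1000

# Hard to estimate but unbiased: wide spread, errors centred on zero.
tunnel_errors = [random.gauss(0.0, 0.3) for _ in range(N)]
# Biased (planning fallacy): similar spread, centred below the outcome.
it_errors = [random.gauss(-0.4, 0.3) for _ in range(N)]

for name, errs in [("tunnel", tunnel_errors), ("IT", it_errors)]:
    print(f"{name}: mean error {statistics.mean(errs):+.2f}, "
          f"sd {statistics.stdev(errs):.2f}")
```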

desipis
10 years ago
Reply to  Marks

I’m not saying that cognitive bias isn’t an issue. I’m suggesting that it’s not the (only) issue that differentiates the difficulty of estimating IT projects in comparison to other projects.

Your example of road tunnelling brings to mind the concept of known-unknowns vs unknown-unknowns. It's the unknown-unknowns that, in my experience, generally cause the problem. You can't just average out the historical costs of things you aren't even aware you will need to do.

Imagine if, after you'd estimated the cost of your tunnel, accounting for a reasonable variance of ground types, and accepted the job, the client tells you that they consider a dozen connecting tunnels part of the job, because if the tunnel doesn't connect up to their existing network it's useless to them. Or imagine once you've accepted the job, it turns out the boring machines you planned to use make too much noise for the residents and you're barred from using them. You now have to plan to dig out the tunnel the old-fashioned way.

These types of issue might be obvious variations, or at least reasonably foreseeable, in the field of civil engineering, where people have been building things for centuries, there are good standards for modern development, and most people have a reasonable understanding of spatial and material issues. However, unforeseen problems of this magnitude are still quite common when dealing with IT projects, and often people don't accept their significance because they seem prima facie simple.

Tel
10 years ago
Reply to  desipis

Estimating IT related projects has more in common with scientific research or mineral prospecting. How does one estimate how long it will take to find the next big gold deposit, or how much money it’ll cost to cure cancer?

Computers can be programmed to do a great many different things. If you are building a business database to keep track of customers and invoices, you should be able to estimate pretty accurately what is involved. All the components are well known and well understood (except perhaps the business itself).

If you want to build the next great Internet search engine, or a machine that can describe the contents of a photograph, or a poker program that can beat champion human poker players, that’s a different story. All of these are IT projects, they are just different types of IT projects.

The biggest mistakes I see come from assumptions that information systems are just another widget you churn out from the factory.

Some information systems are just widgets to be churned out, some, but not all. Only experience can tell you which is which, and anyway next week the answer will be different.

Paul Frijters
10 years ago

This post confuses me no end. Let's break it down: the main assertion in the post is that the forecasted costs of IT projects are usually way too low. This seems to be attributed to planning fallacies, in that people think they are smarter than they are and understand the world better than they really do. If only programming were construed as an iterative, self-feeding, open-source software process, things would be great.

I basically don't believe a word of this. For one, in many IT projects you can simply contract out particular bits of IT, where companies that go over cost have to bear those additional costs themselves, forcing them to become quite realistic. So people are not inherently too stupid; they are simply not always motivated to give a correct costing. Hence I presume we are not really talking about IT jobs in general but rather 'vague' IT jobs within the civil service, the generic case being a minister who doesn't really know what he wants asking civil servants, who have no idea what he wants and no idea what is possible, for an estimate of costs. What are the usual reasons for such estimates to be far too low? The usual suspects:
– budget creep: the IT unit doing the project deliberately looks to add things in order to get more resources. The essential problem is then one of a weak budget constraint and an uninformed principal, not a planning fallacy at all. Obvious solutions include an intermediary who does the diagnostic.
– credence-good problems: the smart IT manager knows what the client needs, but the client himself does not know. Knowing that the client is unlikely to be willing to fork out the actual cost, the IT manager lures him in, then screws him over. Again, more an issue of imperfect contracting than a planning fallacy. Obvious solutions involve contracting out.
– stakeholder drift: as an IT project gets underway, all kinds of affected parties start to come up with objections (like copyright or privacy) that double the size of the project. Since the project does not truly have a single client but rather a potential community of clients, this leads to cost blow-outs because of outsiders muscling their concerns into an existing project. Solutions to this one are increased secrecy and deliberately vague initial goals, so as not to alert those stakeholders one wants to avoid alerting.

Thinking about these problems, I for the life of me don't see how thinking like a citizen programmer is going to solve any of them. So I clearly don't get the point. What is the point then? Should the civil service be more open in its IT projects and invite an open-source software mentality to what it does? This is of course the whole Government 2.0 agenda, which of course has entrenched opposition from all the insiders set to lose control and influence. Is that then the point – that there are vested interests in the civil service preventing more efficient processes, using bogus arguments such as lower estimated in-house costs to prevent loss of control? If so, we should be discussing the political economy of those insiders. Etc. It seems the post is shadow boxing around the real issues.

desipis
10 years ago
Reply to  Paul Frijters

For one, in many IT projects you can simply contract out particular bits of IT where companies that go over cost have to bear those additional costs themselves, forcing them to become quite realistic.

This sounds like the sort of daft comment a PHB would come up with. Shifting responsibility doesn't solve issues that occur due to a lack of knowledge.

It's also not as effective at motivating accurate estimates as it might first seem. While the contract might give you the legal right to force the external party to bear the cost, there's a good chance that doing so is not in the best interest of the buyer. As I mentioned above, by externalising the work, the external party has control over the people involved in the project. This means that while the buyer might legally own the IP and physical assets of the project, they are still reliant on the third party to get any real value out of them.

If the buyer chooses to exercise their legal right, they will likely ruin the relationship between the parties and are effectively throwing away everything they invested internally in the current project, starting again from scratch possibly years later. There is a fair chance that legal action is never going to fully recover the costs associated with abandoning the work and starting again. Then there's the issue of the internal politics of contract management, where taking legal action could be considered a contract management failure significantly worse than a delayed or over-budget project.

There's also the issue you raised yourself – "the smart IT manager knows what the client needs, but the client himself does not know" – only there's a fair chance that the IT contractor doesn't really know either. This makes it very difficult to write contracts around delivering an IT system where neither party really knows what's actually going to be delivered.

desipis
10 years ago
Reply to  Paul Frijters

Usually the development of a system reveals unforeseen requirements

I’m curious how you would recommend estimating the cost of unforeseen requirements.

Marks
10 years ago
Reply to  desipis

http://flyvbjerg.plan.aau.dk/0406DfT-UK%20OptBiasASPUBL.pdf

These people have had a go at the problem for transport projects, but the process is applicable to all sorts of projects.
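The mechanics of that approach (reference-class forecasting, with "optimism bias uplifts" applied to raw estimates) are simple once the historical distribution exists. A sketch – the uplift figures below are hypothetical placeholders, not the published DfT values:

```python
# Reference-class forecasting in miniature: uplift a raw estimate by the
# historical overrun of similar past projects, at a chosen confidence
# level. The figures here are illustrative, not real published uplifts.
UPLIFTS = {
    ("it", 0.50): 0.25,  # 50% chance of staying within budget
    ("it", 0.80): 0.60,  # 80% confidence requires a bigger buffer
}

def uplifted_estimate(raw_cost, project_type, confidence):
    return raw_cost * (1 + UPLIFTS[(project_type, confidence)])

print(uplifted_estimate(10_000_000, "it", 0.80))  # 16000000.0
```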

Marks
10 years ago
Reply to  Paul Frijters

Paul,

One of the reasons for crowd-sourcing something like this is that it enables one to quickly get some numbers up on the degree of project cost and time over-run across a large number of projects.

As others have pointed out, and as I have found in my experience of general project management, there is absolutely no magic contractual bullet that allows for a true fixed price for complex projects. All it takes is for the client to find one more thing that they need, and the contractor can put in a price for that variation that kills the estimate. Contract law is very complex, and many a time I have seen people who thought they were on a winner contractually lose their shirts in court.

This does not obviate the need for people to clean up their acts, contractually and otherwise, as you have pointed out. But often enough clients do not have the knowledge to effectively draw up a contract for specialised services in any event, and where it is for something being developed, it becomes even more nebulous. So a crowd-sourced database which gives managers an idea of the extent of overruns in certain types of project is bloody useful.

Marks
10 years ago
Reply to  Marks

I meant to add.

An example of where this is very useful is feasibility planning, where someone comes up with a great idea… political pork-barrelling, for example.

In this case, if the initial bright-eyed optimistic estimate done on the back of an envelope could be factored up with relative ease by an appropriate amount – from a crowd-sourced database showing that overruns for a particular type of project were X per cent – many projects would not even see the light of day. Especially if the opposing political party were looking to trip the other party up.

And a good thing too.

Tel
10 years ago
Reply to  Paul Frijters

For one, in many IT projects you can simply contract out particular bits of IT where companies that go over cost have to bear those additional costs themselves, forcing them to become quite realistic.

Oh yes, you were intending to select a Pty Limited company for this task?

Tel
10 years ago
Reply to  Paul Frijters

Fixed-price contracts suffer from a double market for lemons. Information is asymmetric in both directions.

Yeah good point.

It is very difficult to evaluate the quality of bought-in software components, and that's presuming the buyer even understands the importance of doing such an evaluation (most don't bother). Then you run into the problem that contractual specification of the requirements is usually of a similar order of complexity to just writing the software.

Paul Frijters
10 years ago

Jacques, desipis,

No need to get excited, I am just trying to understand the point of the post and am perfectly willing to admit I am no expert at this! Your replies accentuate the point that the real issues probably have to do with political economy and labelling, not innate stupidity as the post claims. Desipis makes a good case that there are indeed issues of limited accountability, transaction costs, and ownership at the heart of this, including the costs of legal action. Jacques' point about 'scope creep' seems to indicate that the eventual product is a bigger product (still wanted by the original consumer) than the original one, because the consumer discovers what he wants – which means one cannot really say that the initial estimate was wrong, rather that the original estimate pertained to a different job than the eventuating one. You are basically making my point, which is that the cost blow-out has less to do with the mental deficiencies of humanity and more with old-fashioned missing markets, transaction costs, etc.

Fixed-cost contracts don’t work, period? Why are there so many of them then? I can only presume another agency problem at the bottom of this.

Marks
10 years ago
Reply to  Paul Frijters

There is a whole library full of literature on this. I personally think that it is a cognitive bias, having seen how people get incredibly enthused and emotional about their projects, quite outside the good housekeeping issues that you have mentioned.

People more often than not invest quite a bit of their pride and prejudice into their projects, and especially if there is someone senior on staff who is thusly impelled, the sky is the limit.

Andrae
10 years ago
Reply to  Paul Frijters

“Fixed-cost contracts don’t work, period? Why are there so many of them then? I can only presume another agency problem at the bottom of this.”

Yes.

Actually, there is one form of ‘fixed-cost contract’ that is well known to work, and that is software-as-a-product of which you avail yourself every time you buy a copy of Microsoft Office or download a smartphone app. The problem is that only a very small minority of software can be generalised like this. For anything custom you run into all manner of agency issues.

Felix Barbalet
10 years ago

Paul, you make good points. I agree that in theory you should be able to effectively "contract away" these problems, but in reality I am aware of very few examples of this applying to any IT project where the buyer wants something "innovative".

I think if you narrow your analysis to IT projects that require innovation, or the delivery of something that needs to be developed with some uncertainty about how to do so, then my argument might make more sense.

That is – contracts do appear to be effective where there is a clear understanding of what the buyer wants and the contracting party knows how to deliver it (probably because they've delivered an identical product before).

But where there is uncertainty about what it is that the buyer wants, and uncertainty about how the contracting party is to deliver it, there appears to be a systematic under-estimation of the project scope and/or costs.

In addition, I think we could agree that the larger a contract is, the more difficult (costly) it is likely to be to enforce (thanks Jacques – I was not aware of that body of research on project scope).

You say

one cannot really say that the initial estimate was wrong, rather that the original estimate pertained to a different job than the eventuating one.

I think that is perhaps the point – the problem (irrespective of whether you describe it as an inability to contract effectively or as a cognitive bias) is amplified by the size of the project at hand.

The reference to how a citizen developer does things is meant primarily as a suggestion for how to avoid the huge cost of failure in large monolithic projects – that is, don't do them. Instead, split things up, build on existing foundations (and the leading edge of existing innovation, which is my reference to open-source) and join things together using modular and open mechanisms. That is how the citizen developer must operate.

Yes these are all Gov 2.0 arguments, very true.

With those clarifications in mind I hope my post might make more sense. The post was not meant to be an economic analysis of contracting theory – that is far from my area of expertise. Thanks to all for the comments – very nice to read this discussion.

Tel
10 years ago

Egads! You solved the mystery. Simply chop up a $10M project into 10 separate projects each worth $1M. The chance of success goes up, and even if one of those projects does fail, you are only out of pocket by 1/10 as much.

And this guy also solved it:

(i) Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.

(ii) Expect the output of every program to become the input to another, as yet unknown, program. Don’t clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don’t insist on interactive input.

(iii) Design and build software, even operating systems, to be tried early, ideally within weeks. Don’t hesitate to throw away the clumsy parts and rebuild them.

(iv) Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you’ve finished using them.

M. D. McIlroy, E. N. Pinson, and B. A. Tague, "UNIX Time-Sharing System: Foreword", The Bell System Technical Journal, Bell Laboratories, 1978.

desipis
10 years ago
Reply to  Tel

Simply chop up a $10M project into 10 separate projects each worth $1M.

Which seems like a great idea up until the point you have to spend another $5M getting those separate projects to work together in the way the original $10M project was intended to do.

Andrae
10 years ago
Reply to  desipis

Make it 10 $1mil projects to do 10 things; and another 10 $1mil projects to get each to talk to the other 9. Now you are only $10mil over budget, and you have an extremely high chance that you will get something at the end for your money.

Compare this to a single $10mil project, fear of which will inspire 'planning-mirage', and internal incentives will ensure it gets 'sunk-cost' all the way up to $20-$50mil before it finally implodes under its own weight, leaving you with nothing.

Which would you rather have?

Of course those same internal incentives will ensure that executives will choose the latter every time, as it is far more career-enhancing to lead a $10mil project through its first few 'preliminary' milestones than to co-ordinate a scrappy conglomeration of 10-20 projects, some of which will 'fail fast' before you have had a chance to promote yourself out of the hot seat.

As with many issues, insiders have known how to solve them for decades, the real problem is how to engineer organisations so the known solutions can be deployed.
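The arithmetic in that comparison is worth spelling out. A toy expected-outcome calculation – every probability and dollar figure below is an illustrative assumption, not data:

```python
# Toy comparison of the two strategies discussed above ($ figures in $M).

# Monolith: one $10M project that, per the comment, tends to absorb
# several times its budget and then fail outright with high probability.
p_monolith_fails = 0.7
monolith_spend = 30.0                              # average sunk cost
monolith_value = (1 - p_monolith_fails) * 10.0     # value only if it ships

# Split: 20 x $1M projects (10 features + 10 integrations), each failing
# independently and cheaply, each delivering a slice of the value.
p_small_fails = 0.2
split_spend = 20 * 1.0
split_value = 20 * (1 - p_small_fails) * 0.5       # ~$0.5M value per slice

print(f"monolith: spend {monolith_spend}M, expected value {monolith_value}M")
print(f"split:    spend {split_spend}M, expected value {split_value}M")
```

On these made-up numbers the split strategy spends less and delivers more in expectation – and, just as importantly, it fails in $1M increments rather than $30M ones.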

Patrick
10 years ago

This discussion is interesting in its own right and quite apposite.

desipis
10 years ago

I think I agree with most of what’s covered in this summary.

Jorie Braunold
9 years ago

Hey all,

If you’re interested in citizen developers, I would recommend Jon Sapir’s book on the subject as it relates to Salesforce: https://www.createspace.com/4808463. Definitely worth the read!

Jorie