What works: getting to the land of ‘how’: Complete essay

Note: this essay was published in three parts in The Mandarin and is published here in consolidated form (complete with its footnotes).

It is impossible to remember, until one gets in the country … that they care about their experiment more than about making things work.

John Maynard Keynes on Soviet Russia, to Lady Ottoline Morrell, May 2, 1928.1

Part One

I.         The land of ‘what’ and the land of ‘how’

From at least the years of the ‘third way’ in the 1990s under Blair and Clinton, we’ve been hearing what governments need to do to address our various social problems. Again and again, ‘thought leaders’ tell us what we must do – move beyond one-size-fits-all services to ‘joined up government’ to ensuring that programs do things ‘with’ people rather than ‘to’ them. Plausible as they are, these ideas have barely been tested. For while they tell us what we have to do, we’ve scarcely learned how.

At the outset there seemed to be a seductive straightforwardness to getting to how. As Bill Clinton put it, “nearly every problem has been solved by someone, somewhere”. The challenge was “to find out what works and scale it up”. For me these words stand as a creation myth of the problem I want to address. They even show us original sin, because if you pay close attention there it is! Clinton suggests we learn how to solve our problems by learning a ‘what’ – what works – and then scaling it.

Yet here we are nearly three decades on and, despite numerous promising innovations in small scale programs, they have all shared the same fate. It’s hard to think of a single example of action to address social problems that’s started small and been ‘scaled’ as Clinton proposed. And despite endless inquiries into our failure to address social problems – for instance Aboriginal or multi-generational disadvantage – and endless restructurings and resettings of policy in response, we’ve never got far. Peter Shergold lamented the problem in 2005.2 Yet despite a term as the nation’s chief public servant, he conceded in 2013 that the problem remains.3

This is the first installment of a three part essay which is itself part of a larger project. In this article I’ll set the stage, showing the subtlety and depth of the problem. For, when it’s pointed out, we all understand that there’s a difference between ‘knowing what’ the rules of tennis or chess are and ‘knowing how’ to play. My claim is that in all kinds of ways we insensibly confuse the two and so substitute ‘knowing what’ (or ‘knowing that’) for knowing-how. In this first part of the essay, I’ll show how this happens in our universities and the professions they teach. In the second, I’ll show how it occurs in government agencies and programs. The third part concludes with a look at recent initiatives like nudge units and What Works Centres that seek to foster greater ‘knowing how’ and innovation in government. The key to their success so far has been the way they bolt on to business-as-usual and, in so doing, improve it. This essay is written to try to articulate how they might envisage a more ambitious future.

II.         From the foundations to the commanding heights

Our confusion in understanding the distinction between ‘knowing that’ and ‘knowing how’ goes all the way back to our language itself. The lives we lead are built on a vast repertoire of tacit knowhow that we don’t and indeed can’t make fully explicit. Prosaic examples include anything taught by doing like riding a bike or kicking a football. But this stretches to skills you use all the time. Like those you’re using now – to help you interpret the strange squiggles your eyes are scanning while your brain converts them insensibly into meaning. These tacit skills are fundamental to making social and economic systems work, not least the judgements people make about what constitutes good work and what does not.

This issue is not discussed much in public administration or in economics, but one tradition has developed its ideas with close attention to it. As John Gray explains, Friedrich Hayek’s case for the indispensability of markets wasn’t just that they harnessed explicit local knowledge and expertise distributed throughout the economy. It turned on:

the far more fundamental problem of the practical knowledge on which economic life depends being embodied in skills and habits, which change as society changes and which are rarely expressible in theoretical or technical terms.4

Hayek further suggested that English speakers are disadvantaged in attending to the distinction between ‘knowing that’ and ‘knowing how’. The English language provides a more limited vocabulary for appreciating the distinction than many other European languages such as German. It was largely invisible to English-speaking philosophy until Gilbert Ryle published on it in 1946.5 There’s much to be unpacked here. In the academy the status of ‘knowing that’ is far higher than that of ‘knowing how’. I expect this is particularly pronounced in the Anglosphere. Knowhow is often gained via learning-by-doing within an apprenticeship rather than via the discursive methods of traditional teaching. This apprenticeship tradition remains much stronger in Germanic and Northern European cultures than in the Anglosphere.6

The great economist, cyberneticist and organisational theorist Herbert Simon is one of the few English speaking thinkers to foreground this distinction between ‘knowing what’ and ‘knowing how’.7 For him, the distinction marks a faultline: on one side lie the sciences; on the other, the professions. Sciences are for knowing what is given and necessary in nature. By contrast, the professions, such as engineering, medicine, business, architecture and painting, are for doing. They’re concerned “not with the necessary but with the contingent, not with how things are but with how they might be”. In short, Simon concludes, the professions are all fundamentally about design – that is, about devising “courses of action aimed at changing existing situations into preferred ones”. Here’s how I’d put it.

Science is about the universe and what is necessary. Design is about the multiverse and what is possible.

This highlights the irony of a process that reached its apogee in the decades after WWII. Seeking to clothe themselves in the mantle of science, the universities teaching the professions emptied their curricula of design content:

Engineering schools gradually became schools of physics and mathematics; medical schools became schools of biological science; business schools became schools of finite mathematics. The use of adjectives like “applied” concealed, but did not change, the fact. It simply meant that, in the professional schools, those topics were selected from mathematics and the natural sciences for emphasis which were thought to be most nearly relevant to professional practice. It did not mean that design continued to be taught, as distinguished from analysis.

Any economics graduate will appreciate Simon’s point. As he put it, “in terms of the prevailing norms, academic respectability calls for subject matter that is intellectually tough, analytic, formalizable, and teachable”. In Part Two I show these same confusions turning up in the world of work. There, the drivers are not only the greater prestige of ‘knowing that’ over ‘knowing how’ but also its greater ‘legibility’ to systems and its greater scalability (real or imagined), particularly to those at the top.

Part Two

III.         From the commanding heights to everyday routines

The big public sector careers are built in the land of ‘what’ – that is, in designing and administering policy in our social services systems in our capital cities. Meanwhile the land of ‘how’ is the land of ‘street-level bureaucrats’. It’s out in the sticks. And that’s inauspicious territory from which to build a public sector career. It’s a mark of how disconnected from the ‘how’ those in the centre are that, as I’ve previously documented, they imagine that something like a market in ‘what works’ either already exists or will somehow construct itself around opportunities as they present themselves. And so, somehow, our complete failure to build even the rudiments of a system that might detect and then seek to expand what works wherever we find it goes unremarked. This is true of casual discussion, of senior bureaucratic and political leaders’ prepared speeches, of academic work and of the stream of independent reports that appears every year or so, including the recent comprehensive Thodey Review of the public service.

Of course people understand the difference in principle between ‘knowing what’ and ‘knowing how’ when their attention is drawn to the distinction. But they’re generally unaware of the way ‘knowing what’ has come to substitute for knowing how throughout their world. Because organisations and systems can’t easily scrutinise their agents’ know-how, it is made legible to them via the proxy of knowing what. Employees’ credentials come to stand for their know-how. As entrepreneur and essayist Paul Graham has written, this process begins at school and most of the confusion between the two occurs insensibly:

For me, as for most students, the measurement of what I was learning completely dominated actual learning in college. … Getting a good grade in a class on x is so different from learning a lot about x that you have to choose one or the other, and you can’t blame students if they choose grades. Everyone judges them by their grades — graduate programs, employers, scholarships, even their own parents.8

It took Graham the best part of his professional life to understand what was really going on, and how hard it was to get the high achievers he mentored in start-ups to focus on the real ‘how’ of their job – how to make great products – rather than looking round for tests to ‘hack’. (In that context this involved looking for tips and tricks to attract the money of investors and customers).

Where know-how pervades an organisation, the damage done by these expedients will often be contained. Credentialism may dominate recruitment, but thereafter the actual contribution of those recruited can loom larger in the system’s operations. This will often be the case where lack of know-how is made conspicuous by failure – as in the case of engineering – and/or where professional knowledge is given independent status within organisations – as with medicine. Elsewhere – in teaching, social work and public administration – professionals are both less well trained and enjoy less independent status.

Here the various makeshifts by which organisations govern know-how – such as KPIs – can take on a life of their own, just as tests come to dominate what is learned in school. This process is also a major factor in consulting. The biggest firms – now known as ‘brands’ – deliver professional services from people with some of the best resumes available in law, engineering, policy development, management, communications and PR. And whether or not they really have the appropriate know-how for the task at hand, and however well their incentives align with the longer term interests of their client, no-one ever got sacked for hiring them.

The point made above regarding the disconnect between knowing that and knowing how in the professions might seem somewhat rarefied. After all, economists and engineers tend not to deliver services on the ground. Yet the imaginative landscape implied by their disciplines and other high-status professions aspiring to scientific status casts a long shadow. The tendency for ‘knowing what’ – or various imitations of it – to take precedence over ‘knowing how’ characterises the way social policy is designed and delivered.9

Thus, for instance, the New Zealand Treasury is proud of its wellbeing framework, on which it’s worked for around a decade – far more assiduously than Australia’s desultory exercise. On the arrival of Prime Minister Ardern, who was keen to promote wellbeing as a policy objective, the framework enabled the rapid production of a wellbeing ‘dashboard’ for policymakers. In principle, it can answer questions like “what is the level of Māori wellbeing in Christchurch?”

This information may or may not prove valuable. But it’s from the land of ‘what’ and tells us surprisingly little if we want to visit the land of ‘how’. It offers no direct insights into how one might improve Māori wellbeing in Christchurch. Yet this was surely the point of the exercise. As I’ve documented elsewhere, this is par for the course in most jurisdictions. We might equally invest in a dashboard showing us the prevalence of headaches without noting that people have used salicylic acid to moderate headaches for three and a half millennia. It is now marketed as aspirin.

When embarking on some difficult new venture, or the refurbishment of an old one, the habit is not to start from a stocktake of what knowhow and capability is available to the system in the field. Nor is it to identify where performance has been best to ensure that those people and their insights help steer the process of lifting the system’s capability more broadly. This would be analogous to the way evolution and markets build from outbreaks of what works best wherever they occur in the field rather than designing and imposing a plan from the top down.

Rather than this, the construction of frameworks becomes alluring. Frameworks are from the land of ‘what’. It might be the case that some frameworks embody very deep and rigorous thought. But no-one scrutinising them from the outside could ever tell. For, just as certain approaches to economics tend to ignore whether their subject is silicon chips or potato chips, so frameworks are written in a language which is largely independent of the work being done. They tie together inputs, outputs, outcomes and an overarching ‘vision’, all strung together with pleasing abstract nouns or adjectives. Words like ‘equitable’, ‘effective’, ‘efficient’, ‘accountable’ and ‘sustainable’ often figure, as if asserting them guaranteed their presence in the program.

All this makes the construction of frameworks an ideal displacement activity for the generalists running the show. It displays their busyness leading and organising the troops while keeping them entirely insulated from having any skin in the game. If a bridge falls down or billions go missing, that word ‘accountable’ is in the framework for a reason. The malefactors can be rounded up from the land of ‘how’. As if betraying unquiet consciences at this vacuity, such framework documents frequently arrange their pleasing words into strategic diagrams which suggest specific relationships between them. Yet it’s surprising – or perhaps unsurprising – how gratuitous these diagrams are. Or, on occasion, worse. For on taking them seriously, it’s remarkable how often the relationships they imply make no sense or, worse still, betray the confusion and magical thinking at the heart of the project they illustrate.

At this point, it may have occurred to you that know-how is much harder to govern, and so a much harder thing to deliver, than knowing what. It’s also often much less ‘scalable’. So not only have I no magic wand to wave that will fix it all, but if I did, I would probably be perpetrating the very thing that I’m critiquing – fads peddled by ‘thought leaders’. There is no shortage of them, in TED talks and endless elite international events repackaging third-way nostrums from the land of ‘what’ when we desperately want admission to the land of ‘how’.

Be that as it may, in Part Three I’ll take a look at three new kinds of institutions designed to win new know-how and scale it in government service delivery: government innovation ‘labs’, behavioural insights units and What Works Centres. The key to their success so far has been the way they bolt on to business-as-usual and, in so doing, improve it. But as I’ll explain, if these acorns are to grow into the oak trees that might herald a mature system in which know-how is won and ‘scaled’, they’ll have to move beyond their own tendencies to substitute knowing-what for the more difficult and distant destination of knowing-how.

Part Three

IV.         New ways of institutionalising know-how

Given governments’ evident failure in rising to the challenges of the third way, it’s not surprising that there are various promising initiatives intended to pursue better know-how in government. They are usually thought of as part of the ‘innovation in government’ agenda. These initiatives must germinate and grow within a much larger incumbent system to which they must make themselves useful. So they have begun as incremental programs. The question now is whether they will remain forever incremental. My critique below is not offered in any spirit of disapproval – rather the opposite. It pursues a fond hope that, with sufficient care to understand our situation and patient work to improve it, the initiatives we see before us are but acorns that might grow into a forest of oak trees over the next few decades.

Government innovation labs are a small and, so far, relatively tokenistic nod towards the idea that innovation in government agencies cannot be specified in systematic ‘knowing what’ that might be imported from elsewhere, but must be won in the development of new ‘knowing how’. They house activities such as prototyping, human-centred design and small scale experiments in which failure is regarded as a normal and necessary foundation for working towards success. They have certainly led to some worthwhile new initiatives and improvements on old ones.10

Behavioural insights or ‘nudge’ units have also proliferated, receiving far more recognition and status than labs, perhaps because of their alignment with a new development in the academy – the import of psychological research into economics known as ‘behavioural economics’. One of their stocks-in-trade is A/B testing, which was pioneered in early 20th century media and subsequently adopted far more widely. It is used to optimise outcomes from government communication such as tax arrears letters and SMS reminders.

Once such know-how is won, it is typically easily scaled – not requiring any particular skill from those who use it. Behavioural insights units also capture the know-how residing in academic journals – for instance about the setting of defaults to optimise outcomes – converting it into a ‘what’ that can be readily ‘scaled’ or applied system wide. Still, the focus on testing the usefulness of interventions large and small before deploying them – via A/B testing or more elaborate randomised controlled trials – has produced large benefits compared with the small scale of the investment in such units.
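To make the mechanics of such a trial concrete, here is a minimal sketch of the comparison behind a simple A/B test of two reminder-letter variants. It is illustrative only: the variant labels, mailing sizes and response counts are invented, and real trials involve pre-registration, power calculations and ethical sign-off.

```python
# A minimal, hypothetical sketch of the arithmetic behind an A/B test of two
# versions of a tax-arrears reminder letter. All figures below are invented.
from math import erf, sqrt

def two_proportion_z_test(responses_a, sent_a, responses_b, sent_b):
    """Return the z statistic and two-sided p-value for the difference in
    response rates between variant A and variant B."""
    p_a = responses_a / sent_a
    p_b = responses_b / sent_b
    # Pooled response rate under the null hypothesis of no difference.
    p_pool = (responses_a + responses_b) / (sent_a + sent_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: the standard letter; variant B: a reworded 'social norm' letter.
z, p = two_proportion_z_test(responses_a=900, sent_a=10_000,
                             responses_b=1_030, sent_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p favours rolling out variant B
```

The point of the surrounding argument is visible here: once the winning letter has been identified, rolling it out requires no particular skill from those who send it.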

Both nudge units and innovation labs have been helpful in introducing new ways of working. Thus for instance, as David Halpern reports, his own behavioural insights or BI unit embraced “a healthy dose of ethnography – a ‘method acting’ approach to policy – as an essential ingredient in translating BI-inspired ideas into the real world”. Likewise the Unit sought to maximise the use of feedback, not just to manipulate behaviour in presumptively beneficial ways, but also to ensure government systems themselves were continually improving their performance.11 One might have hoped such methods were already well entrenched in social service policy and delivery. But the sad truth is that too often they’ve been conspicuously absent.

‘What Works Centres’ are another such institution. As Mulgan and Breckon put it, the idea behind them is “very simple”:

It’s to help busy people who make decisions access the best available knowledge about what works. … They do this by orchestrating the best available evidence and making it usable for policymakers, for public servants, and for the wider public. Experience has shown again and again that it’s not enough to gather evidence and put it into repositories. Unless users are closely involved in how evidence is shaped and made accessible, behaviour is unlikely to change.

The proponents of What Works Centres are aware of the problems of context. Thus in one article Mulgan stresses the need to ask not just ‘what works’, but “for who, when, where and with who.” Just as with the other kinds of institutions mentioned above, What Works Centres try to involve practitioners and to tailor their output to be useful to them in the field. In this they help cultivate the public goods of professional activity, most particularly by nurturing the knowledge of communities of practice. They also stress the need for building “intelligent feedback loops” into service delivery.

Still, just as there’s much to the adage that what gets measured gets managed, so what gets codified in this process tends to be the core product of tips and tricks that ‘work’, with the caveats about context downplayed. Thus for instance Mulgan celebrates the Crime Reduction Toolkit as a “good example” of the way What Works Centres translate research into “useful products, distilling complex patterns into formats that can be used by busy professionals”, in this case producing “a Which?-style guide at the College of Policing that weighs up the effectiveness of things like correctional bootcamps, CCTV or electronic tagging”. The tool is a table in which one can read off hundreds of potential interventions and see how they rank in five fields, which rate the quality of the evidence available on their impact on crime, how they work, where they work, how to implement them and what they cost.12

Figure: The Crime Reduction Toolkit

V.         Conclusion

For as long as we continue to characterise what we’re trying to achieve with a name as amorphous as an ‘innovation agenda’, it’s hard to imagine it amounting to anything more than an add-on to business-as-usual – which of course guarantees that it will ultimately be constrained by the procedures and imperatives of business-as-usual. All the new initiatives I’ve identified operate as if the most prized kind of know-how they seek to generate is ‘tips and tricks’ – which is to say know-how that’s been codified so that it is a ‘what’ that can be applied more widely and ideally ‘scaled’ into whole systems. The A/B testing mentioned above provides the paradigm case of special-purpose know-how that, once won, can be scaled as a ‘what’. Given the straightforwardness of such techniques, it’s not too much to hope that they are increasingly built into business-as-usual wherever they can be useful.

That such measures should be a priority makes perfect sense if one wishes to demonstrate ‘early wins’ on which one can gain some prestige within the system and parlay it into building something more substantial. But to succeed in properly reorienting government service delivery towards knowing how to perform the difficult tasks society gives it – including, most importantly of all, identifying and straightforwardly communicating where it has no such knowhow – would require a massive transformation of the system we have today. I intend to sketch possible elements of that transformation in subsequent essays. I have already sketched one such institution – an Evaluator General. It seeks to build accountability not on the accountability of those lower in a hierarchy to those above, but on those in the system – particularly the ‘street level bureaucrats’ out in the field – holding themselves to account for their practical achievements. Yet to avoid wishful thinking, such self-accountability needs to run the gauntlet of independent validation by others with domain expertise.

It is by way of such mechanisms that we might aspire to rebuild the know-how of the professions themselves around similar principles, rather than building them, as they have been hitherto, on foundations of independence and prestige, rendered accountable to a community of practice but much less to independent verification. But before doing that, in the next essay I want to sketch some additional kinds of cognition and action that systems need to take account of. For social knowledge and cognition – and so improved social outcomes – require more than knowing how. They require both knowing-with and feeling-with others, and some faith in the likelihood of that taking place.

* Thanks to Gene Tunny and Paul Frijters for helpful comments on earlier drafts.

1 Skidelsky, Robert, 1994. John Maynard Keynes, vol. 2: The Economist as Savior 1920–1937 (New York: Allen Lane), p. 236.

2 As Shergold put it:

If there were a single cultural predilection in the Australian Public Service that I could change, it would be the unspoken belief of many that contributing to the development of government policy is a higher order function – more prestigious, more influential, more exciting – than delivering results. Perhaps it is because I have spent so much of my career in line agencies, learning to deliver indigenous, employment, small business, and education programs that I react so strongly against this tendency.

Mendham, Tim, 2005. “The State of Project Management”, CIO Magazine, 1 Nov, at https://www.cio.com/article/3494549/the-state-of-project-management.html

3 As he put it in 2013:

Too much innovation remains at the margin of public administration. Opportunities are only half‐seized; new modes of service delivery begin and end their working lives as ‘demonstration projects’ or ‘pilots’, and creative solutions become progressively undermined by risk aversion and a plethora of bureaucratic guidelines.

Shergold, P., 2013. “My Hopes for a Public Service for the Future”, Australian Journal of Public Administration, March 2013, Vol. 72, Issue 1, pp. 7–13 at p. 7, also available at https://www.apsreview.gov.au/file/174/download?token=d5v23YKW. Note that the problem exists elsewhere. See for instance Savoie regarding Canada’s public service:

The ambitious know full well that the road to the top is through policy, generating ideas, managing the blame game, being visible in Ottawa circles, and central agencies, not through program management.

Donald J. Savoie, What Is Government Good At? A Canadian Answer, 2015.

4 Thus Hayek often quoted the great Alfred North Whitehead:

It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilisation advances by extending the number of operations we can perform without thinking about them.

5 Here’s Hayek citing that article:

The almost complete loss of the original connotation of ‘can’ in English, where it can scarcely any longer be used in the infinitive form, is … an obstacle to the easy discussion of these problems …. If a German says Ich weiß, wie man Tennis spielt this does not necessarily imply that he knows how to play tennis, which a German would express by saying Ich kann Tennis spielen. In German the former phrase states the explicit knowledge of the rules of the game and may – if the speaker had made special motion studies – refer to the rules by which the skill of a player can be described, a skill which the speaker who claims to know these rules need not possess. German, in fact, has three terms for the English ‘to know’: wissen, corresponding to ‘know that’, kennen, corresponding to ‘be acquainted with’, and können, corresponding to ‘know-how’.

Hayek, F.A., 1962. “Rules, perception and intelligibility”, in Hayek, 2014. The Market and Other Orders, The Collected Works of F. A. Hayek, Volume XV, Bruce Caldwell (ed.), pp. 232–53 at p. 233.

6 In English speaking countries university-based know-ledge is the fulcrum of professional training. Apprenticeship based learning-by-doing typically supplements this only to the extent that the alternative would produce obvious dysfunction. Outside of lower status manual trades, in medicine, law and engineering, practice-based training is now merely part of the pathway from university into the workforce via internships.

7 For readers interested in economics, this distinction was one of the things that drove Simon to dismiss profit maximisation as a good way to think about what business managers did. It might be ‘what’ they thought they did, but the moment you focused on how they did it you had to imagine them doing something more prosaic and grounded in their practice – which Simon called ‘satisficing’.

8 He continues:

Why did founders tie themselves in knots doing the wrong things when the answer was right in front of them? Because that was what they’d been trained to do. Their education had taught them that the way to win was to hack the test. And without even telling them they were being trained to do this. The younger ones, the recent graduates, had never faced a non-artificial test. They thought this was just how the world worked: that the first thing you did, when facing any kind of challenge, was to figure out what the trick was for hacking the test. …There are certainly big chunks of the world where the way to win is to hack the test.

9 In economics, I’ve documented how this occurs with the crude distinction between ‘positive’ and ‘normative’ economics. The issues raised by Simon above would be news to almost all economists – including professors in the field. Given this, economists imagine that normative conclusions about economic policy must follow relatively straightforwardly from positive models of the economy. (Yet the distinction between positive and normative should map onto the distinction Simon references above between ‘applied’ analysis and ‘design’ – the knowhow to effect “courses of action aimed at changing existing situations into preferred ones”.)

10 For an example of what looks like useful work labs can do, see “Box 4.6: The case of New Zealand drivers’ licences” in Lateral Economics, 2017, “Through thick and thin: Cultivating self-organisation in Australia’s regions”, July, at p. 40.

11 See e.g. Halpern, David, 2015. Inside the Nudge Unit, pp. 211–212.

12 Thus for instance the What Works Centre for Crime Reduction promulgates an online tool that “allows users to weigh up evidence on the impact, cost and implementation of different interventions and use this to help shape their crime reduction efforts”. It lists numerous interventions like “Victim Offender Mediation”. The centre says that it has high quality evidence on this intervention, and that it produces a decrease in crime, but adds that “some studies suggest an increase”. Be that as it may, it does not take much imagination to believe that such interventions could be done well or badly depending on the know-how of those responsible for them. But it’s only the know-ledge that features prominently. Moreover, if know-how were the heart of success, this could remain largely invisible to this methodology. That suggests an alternative methodology – one in which more authority and resources are given to the individuals and teams who are seen to have demonstrated their knowhow with superior results. And this hints at a polycentric order of professionalism.
