What works: getting to the land of ‘how’: Part Three

The Mandarin headed this part of the essay up with a picture of a woodpecker, which seems fair enough. But such ‘nuanced’ imagery, as we say these days, is always off-brand here at Club Pony, where too much directness is barely enough.

The third and final part of this essay – cross-posted at The Mandarin, though only here with footnotes.

New ways of institutionalising know-how

Given governments’ evident failure to rise to the challenges of the third way, it’s not surprising that various promising initiatives have emerged to pursue better know-how in government. They are usually thought of as part of the ‘innovation in government’ agenda. Their contribution began, and remains, incremental. This is unsurprising given that these initiatives must germinate and grow within a much larger incumbent system to which they must make themselves useful. My critique below is not offered in any spirit of disapproval – quite the opposite. It pursues a fond hope that, with sufficient care to understand our situation and patient work to improve it, the initiatives we see before us are but acorns that might grow into a forest of oaks over the next few decades.

Government innovation labs are a small and, so far, relatively tokenistic nod towards the idea that innovation in government agencies cannot be specified in systematic ‘knowing what’ that might be imported from elsewhere, but must be won in the development of new ‘knowing how’. They house activities such as prototyping, human-centred design and small-scale experiments in which failure is regarded as a normal and necessary foundation for working towards success. They have certainly led to some worthwhile new initiatives and improvements on old ones.1

Behavioural insights or ‘nudge’ units have also proliferated, receiving far more recognition and status than labs, perhaps because of their alignment with a new development in the academy – the import of psychological research into economics known as ‘behavioural economics’. One of their stocks-in-trade is A/B testing, which was pioneered in early 20th-century media and subsequently adopted far more widely. It is used to optimise outcomes from government communications such as tax arrears letters and SMS reminders.

Once such know-how is won, it is typically easily scaled – requiring no particular skill from those who use it. Behavioural insights units also capture the know-how residing in academic journals – for instance about the setting of defaults to optimise outcomes – and convert it into a ‘what’ that can be readily ‘scaled’ or applied system-wide. Still, the focus on testing the usefulness of interventions large and small before deploying them – via A/B testing or more elaborate randomised controlled trials – has produced large benefits compared with the small scale of the investment in such units.
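To make the mechanics concrete, here is a minimal sketch of the arithmetic behind such a trial. The letter wordings, response rates and sample sizes below are invented for illustration; they are not drawn from any actual trial.

```python
# A minimal, hypothetical sketch of an A/B test on a tax arrears letter.
# All figures are invented for illustration.
import math

def two_proportion_z_test(hits_a, n_a, hits_b, n_b):
    """Two-sided z-test for a difference between two response rates."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return p_a, p_b, z, p_value

# Letter A: standard wording. Letter B: adds a social-norm line
# ("most people pay their tax on time"). 10,000 letters each arm.
p_a, p_b, z, p = two_proportion_z_test(1520, 10_000, 1730, 10_000)
print(f"A paid: {p_a:.1%}, B paid: {p_b:.1%}, z = {z:.2f}, p = {p:.4f}")
```

The point in the text stands either way: once the winning variant is identified, sending it to everyone requires no further skill from anyone – the know-how has been converted into a ‘what’.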

Both nudge units and innovation labs have been helpful in introducing new ways of working. Thus, for instance, as David Halpern reports, his own behavioural insights or BI unit embraced “a healthy dose of ethnography – a ‘method acting’ approach to policy – as an essential ingredient in translating BI-inspired ideas into the real world”. Likewise the Unit sought to maximise the use of feedback, not just to manipulate behaviour in presumptively beneficial ways, but also to ensure government systems themselves were continually improving their performance.2 One might have hoped such methods were already well entrenched in social service policy and delivery. But the sad truth is that too often they’ve been conspicuously absent.

‘What Works Centres’ are another such institution. As Mulgan and Breckon put it, the idea behind them is “very simple”:

It’s to help busy people who make decisions access the best available knowledge about what works. … They do this by orchestrating the best available evidence and making it usable for policymakers, for public servants, and for the wider public. Experience has shown again and again that it’s not enough to gather evidence and put it into repositories. Unless users are closely involved in how evidence is shaped and made accessible, behaviour is unlikely to change.

The proponents of What Works Centres are aware of the problems of context. Thus, in one article, Mulgan stresses the need to ask not just ‘what works’, but “for who, when, where and with who.” Just as with the other kinds of institutions mentioned above, What Works Centres try to involve practitioners and to tailor their output so that it is useful in the field. In this they help cultivate the public goods of professional activity, most particularly by nurturing the knowledge of communities of practice. They also stress the need to build “intelligent feedback loops” into service delivery.

Still, just as there’s much to the adage that what gets measured gets managed, so what gets codified in this process tends to be the core product of tips and tricks that ‘work’, with the caveats about context downplayed. Thus, for instance, Mulgan celebrates the Crime Reduction Toolkit as a “good example” of the way What Works Centres translate research into “useful products, distilling complex patterns into formats that can be used by busy professionals” – in this case producing “a Which?-style guide at the College of Policing that weighs up the effectiveness of things like correctional bootcamps, CCTV or electronic tagging”. The tool is a table in which one can read off hundreds of potential interventions and see how they rank in five fields, which rate the quality of the evidence available on the extent of their impact on crime, how they work, where they work, how to implement them and what they cost.3

Figure: The Crime Reduction Toolkit
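To show the shape of what such a table encodes, here is a toy sketch of a single row. The intervention shown and the assessments attached to it are invented for illustration, and the field names paraphrase the toolkit’s five dimensions as described above rather than quote its actual schema.

```python
# Illustrative only: a toy record mirroring the five rating fields the
# toolkit reports per intervention. Values are invented, not taken from
# the Crime Reduction Toolkit itself.
from dataclasses import dataclass

@dataclass
class ToolkitEntry:
    intervention: str
    impact: str          # evidence on the extent of impact on crime
    how_it_works: str    # evidence on the mechanism
    where_it_works: str  # evidence on context and moderators
    implementation: str  # evidence on how to implement it
    cost: str            # evidence on what it costs

entry = ToolkitEntry(
    intervention="Electronic tagging",
    impact="some evidence of reduced reoffending",
    how_it_works="deterrence and incapacitation",
    where_it_works="effects vary by offence type",
    implementation="requires monitoring infrastructure",
    cost="moderate: hardware plus monitoring",
)
print(entry)
```

Notice what the record cannot capture: whether those delivering the intervention do it well or badly – the know-how problem the footnote below returns to.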

Conclusion

For as long as we continue to characterise what we’re trying to achieve with a name as amorphous as an ‘innovation agenda’, it’s hard to imagine it amounting to anything more than an add-on to business-as-usual – which of course guarantees that it will ultimately be constrained by the procedures and imperatives of business-as-usual. All the new initiatives I’ve identified operate as if the most prized kind of know-how they seek to generate is ‘tips and tricks’ – which is to say know-how that has been codified so that it is a ‘what’ that can be applied more widely and ideally ‘scaled’ into whole systems. The A/B testing mentioned above provides the paradigm case of special-purpose know-how that, once won, can be scaled as a ‘what’. Given the straightforwardness of such techniques, it’s not too much to hope that they are increasingly built into business-as-usual wherever they can be useful.

That such measures should be a priority makes perfect sense if one wishes to demonstrate ‘early wins’ on which one can gain some prestige within the system and parlay it into building something more substantial. But to succeed in properly reorienting government service delivery towards knowing how to perform the difficult tasks society gives it – including, most importantly of all, identifying and straightforwardly communicating where it has no such know-how – would require a massive transformation of the system we have today. I intend to sketch possible elements of that transformation in subsequent essays. I have already sketched one such institution – an Evaluator General. It seeks to build accountability not on those lower in a hierarchy answering to those above, but on those in the system – particularly the ‘street-level bureaucrats’ out in the field – holding themselves to account for their practical achievements. Yet to effectively avoid wishful thinking, such self-accountability needs to run the gauntlet of independent validation by others with domain expertise.

It is by way of such mechanisms that we might aspire to rebuild the know-how of the professions themselves around similar principles, rather than build them, as they have been hitherto, on foundations of independence and prestige – accountable to a community of practice but much less to independent verification. But before doing that, in the next essay I want to sketch some additional kinds of cognition and action that systems need to take account of. Because social knowledge and cognition – and so improved social outcomes – require more than knowing how. They require both knowing-with and feeling-with others, and some faith in the likelihood of that taking place.

* Thanks to Gene Tunny and Paul Frijters for helpful comments on earlier drafts.

1 For an example of what looks like useful work labs can do, see “Box 4.6: The case of New Zealand drivers’ licences” in Lateral Economics, 2017, “Through thick and thin: Cultivating self-organisation in Australia’s regions”, July, p. 40.


2 See e.g. Halpern, David, 2015, Inside the Nudge Unit, pp. 211–212.

3 Thus, for instance, the What Works Centre for Crime Reduction promulgates an online tool that “allows users to weigh up evidence on the impact, cost and implementation of different interventions and use this to help shape their crime reduction efforts”. It lists numerous interventions like “Victim Offender Mediation”. The centre says that it has high quality evidence on this intervention, and that it produces a decrease in crime, but adds that “some studies suggest an increase”. Be that as it may, it does not take much imagination to believe that such interventions could be done well or badly depending on the know-how of those responsible for them. But it’s only the knowledge that features prominently. Moreover, if know-how were the heart of success, it could remain largely invisible to this methodology. This suggests an alternative methodology – one in which more authority and resources are given to the individuals and teams who have demonstrated their know-how with superior results. And that hints at a polycentric order of professionalism.
