Evaluation: moving beyond well-meaning fads and towards a new professionalism

Cross-posted at The Mandarin.

The first image to come up in a search for ‘new professionalism’. A very nice image too, especially for Kandinsky lovers like myself.

Working in and around government for over three decades, I’ve grown increasingly wary of fads. Remember ‘Reinventing Government’, which proposed that government work more like the private sector, without carefully setting out how to do it? Today it’s design thinking: putting ‘users’, the intended beneficiaries of programs, at the centre of program design and delivery. It’s great that we’re trying to do so.

But we’ve been talking like this for at least two decades – since the ‘Third Way’ fad came and went. (Remember how ‘one size fits all’ wouldn’t do anymore? But again, we couldn’t work out how to do it.) And while promising demonstrations of design thinking proliferate – as they did for Reinventing Government and the Third Way – proving that there are some real prospects in the fad, we’re still miles from big systemic improvements. We’ve got some inklings of what we want, but we’re still stumped about how to do it.

The fads themselves are a symptom of the deeper, careerist malaise. The big civil service career rewards go to strategisers of high policy – orchestrators of fads from the top. Yet the knowledge we’ll need to transform systems around their users’ needs comes mostly from below – from workers in the field and those they serve.

Towards a new professionalism

We’ve been here before. As services in health and education were ramped up from the late 19th century on, we built professions to take us from fairly clear ideas of what we wanted, to working out the ‘how’.

Those professions were far from perfect, freighted as they were with the baggage of the governing class – a particular problem where users were from another class or ethnicity. They also erected barriers to entry. This increased costs and could nurture a complacency that delayed the response to emerging evidence – for instance, that hand washing reduced hospital infections.

On the other hand, professions were and remain the paradigm institution for fostering increasingly knowledgeable ‘communities of practice’. They did develop expertise in difficult practical tasks in the field. And the autonomy their status gave them provided some ballast for their expertise to influence outcomes, alongside the changeable imperatives of their managers.

Today managers direct the traffic. Current managerialism offers measurement and accountability aplenty. That certainly suggests objective standards. In principle this could offer an antidote to professional complacency¹. But it’s mostly driven by what Jakobsen et al call the “political demand for account giving”. Thus, the motive force behind monitoring and evaluation in the civil service is bureaucrats’ and politicians’ joint need to be seen to hold the system to account, not the need for program or system-wide learning².

In response, I propose a new agency – the evaluator-general. Perhaps because this name implies some borrowing from the status of the auditor-general, this has been taken to involve centralising monitoring and evaluation from above. That’s technically true, but ironic nevertheless, for my objective is to decentralise and empower the knowledge and strivings of those in the field so that, to the extent that they are independently tested and validated against the evidence, they are given substantially greater weight than they are now.

And, for as long as monitoring and evaluation remain under their sole direction, no amount of wishful thinking will prevent the institutional imperatives of senior managers and politicians from driving their design, operation and reporting. Expertise – in evaluation and from the field – needs a seat at that table.

Operationalising the demarcation between delivery and understanding impact

Consider the agencies that provide foundational information and integrity in Britain – for instance, the National Audit Office, the Meteorological Office, and the Office for Budget Responsibility. They have independence to insulate them from influence by those within politics and the bureaucracy whose circumstances force them into an intense preoccupation with ‘messaging’. If our system is to function, let alone learn, they must ‘tell it like it is’.

Though such independence has hitherto existed only at the level of whole agencies, I envisage the evaluator-general as an institution through which a new demarcation is operationalised at all levels of the hierarchy: between delivering programs on the one hand and understanding their impacts on the other.

Thus a line agency directed by a political officeholder would deliver a program – or commission its delivery from competing providers – but the evaluator-general would independently oversee and resource the program’s ‘nervous system’ – its monitoring and evaluation.

For this to work well, those delivering services and those monitoring and evaluating them would need to collaborate closely. Physically, they would work alongside each other within the delivery agency and in the field. But the evaluator-general would have ultimate responsibility for monitoring and evaluation in the event of disagreement between its own officers and the delivery agency’s. And, subject to privacy safeguards, the monitoring and evaluation system’s outputs would be regularly published with appropriate comment and analysis.

Monitoring and evaluation would have the primary objective of helping those delivering services measure, understand and thus continually improve their impact. Accountability to those ‘above’ the service deliverers in the management hierarchy would be built in the first instance from this self-accountability of those in the field, who would now have an expert critical friend, or, in Adam Smith’s words, an impartial spectator. Toyota revolutionised the efficiency and quality of car manufacture similarly – by building its own production system around the self-accountability of production teams.

Where cooperation was poor, the efficacy of the system would be degraded, though I doubt it would be less useful than what we have now. Moreover, it would be visible and so, one hopes, corrected.

Thus the new arrangements are intended not just to give some ballast to evaluation expertise and the knowledge of those in the field against the changeable institutional imperatives of senior managers and politicians, but to do so by reference to validation against the evidence. This would also strengthen the efficacy of the profession involved and stiffen its discipline against complacency.

The objectives of the new arrangements

The finely disaggregated transparency of performance information made possible by these arrangements would support:

  • the intrinsic motivation of those in the field to optimise their impact;
  • public transparency to hold practitioners, their managers and agencies to account³;
  • more expert and disinterested estimates of the long‑term impact of programs to enable a long‑run ‘investment approach’ to services; and
  • a rich ‘knowledge commons’ in human services and local solutions that could tackle the ‘siloing’ of information and effort within agencies.

With journalism and political debate increasingly given over to spin, the public sector can insulate itself from this process by strengthening the expertise, resourcing, independence and transparency of the evidence base on which it proceeds.

Footnotes

  1. See Gruen, N., 2019, “Accountability: from above and below” (forthcoming).
  2. As Jakobsen et al put it, the “performance metrics are politically viewed as a legitimate and necessary” way of satisfying “political demand for account giving”. Jakobsen, M.L., Baekgaard, M., Moynihan, D.P. and van Loon, N., 2017, “Making sense of performance regimes: Rebalancing external accountability and internal learning”, Perspectives on Public Management and Governance, 1(2), pp. 127–141, at p. 128. In consequence, form often trumps substance.
  3. By publicly identifying success as it emerged, an evaluator-general would place countervailing pressure on agencies to more fully embrace evidence-based improvements, even where this disturbed the web of acquired habits and vested interests that entrench incumbency.
Comments
Alan
5 years ago

You may care to read Chapter 9 of the South African constitution, which establishes a set of ‘State institutions supporting constitutional democracy’. That is such a mouthful that they tend to speak of ‘Chapter 9 institutions’. Constitutionalising watchdog/monitory/Chapter 9 institutions is a worldwide thing.

Jamaica has an official called the contractor-general whom you may care to look into.

Vern Hughes
5 years ago

I am not averse to a ‘new professionalism’ amongst the managerial class in government. But it is a second-order priority for those of us who live and work outside government, and outside the prerogatives of its managerial class. Our first priority is a seat at the table, at their table, so that our agenda for person- and relationship-centred service delivery can be presented in a way that is free of what Nicholas calls “institutional imperatives”. For us, these “institutional imperatives” mean a raft of assumptions and practices that enable government to work for its apparatchiks, and not for us.

Nor am I averse to the continued efforts of those who have professional interests in government service delivery to seek a winding back of the mandarin culture and its replacement by what they would call ‘professionalism’. But for citizens, consumers, carers, parent navigators of silo-based ravines, and scarred purchasers of therapeutic goods in conditions of unmodified Soviet-model oligarchy, this is not our game. Our game is about upending rigged provider-centred systems in our favour. If those in the business of evaluating provider-centred service delivery regimes for a fee can do it in a more professional way and not forfeit their eligibility to earn a crust in the process, that’s OK. But it is, on balance, unlikely.

Stephen Duckett
5 years ago

Thanks, and I think it is a good proposal, but we need to think about how it fits into the array of existing evaluation mechanisms, and how these need to be improved. The most obvious one needing dramatic change is the Senate estimates process, which is now often a ‘gotcha’ process rather than one that truly holds government to account.

My observation is that collecting voluminous reports from funded agencies, without any apparent analysis of what they show, is often somehow seen as protective in the Senate processes. Reporting without added analysis is a shocking waste of money.

As to the evaluator-general, I think there are two concepts at play here. The first, and most important, is formative evaluation, involving people committed to improving how the program is working. We don’t do enough of this, and a commitment to do more and to make the results available, warts and all, would be very valuable.

The second is summative evaluation. Too often this is contracted to external consultants with the clear objective of justifying and demonstrating the worth of the program. Draft consultancy reports are massaged to ensure that positive words dominate. An external, independent process of evaluation would certainly help the democratic process and improve policy outcomes.