Standards: continued from Part One.
Why is this man smiling? He’s smiling because he is Charles Francis Richter and he came up with the Richter scale. And if you have come up with the Richter scale, every time there’s an earthquake, people who want to sound informed mention your name. And scientific tests prove that everyone — even small insects — likes having their name mentioned.
But Charles has left the world with two problems. First, as Wikipedia reports:
Because of various shortcomings of [Richter’s] scale, most seismological authorities now use other scales, such as the moment magnitude scale (Mw), to report earthquake magnitudes, but much of the news media still refers to these as “Richter” magnitudes.
Still, to preserve comparability with the familiar scale, the scales developed in the Richter Scale’s stead “retain the logarithmic character of the original and are scaled to have roughly comparable numeric values (typically in the middle of the scale)”.
It gets worse. In fact we’re mostly uninterested in all these scales’ measurements, because we’re usually interested in the felt intensity of earthquakes in highly populated places. Thus, for instance, one of Troppo’s apex nodes — Melbourne — recently experienced an earthquake which we were assured was 5.9 on the Richter Scale by people who are paid good money to look serious. By contrast, Christchurch’s 2011 earthquake was only 6.3 on the Richter scale and did vastly more damage. As analysis from Troppo’s Epicentre Analysis Division (EAD) reveals, the main reason for the disparity is that the epicentre of the Christchurch earthquake was in a suburb of Christchurch whereas the epicentre of the Melbourne earthquake was in Mansfield.
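For those who like their seriousness quantified: the logarithmic character of these scales means the two magnitudes are closer than the damage figures suggest. Here is a back-of-envelope sketch — the 5.9 and 6.3 are the figures above, while the conversion factors (roughly tenfold in wave amplitude and about 10^1.5, or ~31.6-fold, in radiated energy per whole magnitude unit) are the standard textbook relationships, not anything Troppo’s EAD has independently computed:

```python
# Back-of-envelope comparison of two earthquake magnitudes.
# On scales of this family, each whole magnitude unit corresponds to
# roughly 10x the seismic-wave amplitude and 10^1.5 (~31.6x) the energy.

def amplitude_ratio(m1: float, m2: float) -> float:
    """How many times larger the wave amplitude of an m2 quake is vs m1."""
    return 10 ** (m2 - m1)

def energy_ratio(m1: float, m2: float) -> float:
    """How many times more energy an m2 quake releases than an m1 quake."""
    return 10 ** (1.5 * (m2 - m1))

melbourne, christchurch = 5.9, 6.3

print(f"Amplitude ratio: {amplitude_ratio(melbourne, christchurch):.1f}x")  # ~2.5x
print(f"Energy ratio:    {energy_ratio(melbourne, christchurch):.1f}x")     # ~4.0x
```

So Christchurch’s quake released roughly four times the energy — a real difference, but nothing like the difference in damage, which came down to where the epicentre was.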
(Even here it wasn’t long before the Victorian earthquake produced shocks that were felt around the world, for instance in comments from Britain’s famously empathetic Prime Minister, but I digress).
In fact if you want to know the intensity of the earthquake in Melbourne you should be using a quite different comparative standard — a Seismic Intensity Scale. But who’s heard of that? And if you’re paid to look serious, that’s serious enough! You’re not paid to sound serious. Now maybe if Charles Francis Richter had been born to Mr and Mrs Seismic Intensity, we wouldn’t be in this situation.
But he wasn’t.
So we are!
II. Designing comparative standards that are fit for purpose
Welcome to the dilemmas of comparative standards. They are brought into existence for any number of purposes, but once they are, usage and familiarity see them solidifying in use. Some comparative standards make good sense in their initial use. But then they become so well known that they’re used to compare things they were never built to compare — or not in the way they come to be compared. Thus GDP was exceptionally well crafted to measure economic activity for the purposes of macroeconomic management.
Then it became a point of comparison between countries. Again, because of its rigour, it was useful for this in some respects. But its familiarity meant that it became more or less ubiquitous as a general measure of a country’s economic performance. It’s pretty good as an aggregate measure, but it’s substantially worse as a summary measure of economic welfare or wellbeing. It can be converted to a per capita measure easily enough, but that leaves out the distribution of income (I think we should care more about median income per capita than average income or GDP per capita). And then there are all the old chestnuts summarised by Bobby Kennedy: that spending money on things — guns, security equipment, military spending, prisons and so on — doesn’t necessarily improve your wellbeing.
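The mean-versus-median point is easy to see with a toy example. The numbers below are hypothetical (incomes in $’000 for ten imaginary people, one of them very well paid) — the point is only that a single high earner drags the average well above what the typical person actually earns:

```python
from statistics import median

# Hypothetical incomes in $'000 for ten people; one very high earner.
incomes = [30, 35, 40, 45, 50, 55, 60, 70, 90, 525]

mean_income = sum(incomes) / len(incomes)  # 100.0 -- flattered by the outlier
median_income = median(incomes)            # 52.5  -- the "typical" person

print(f"Mean:   {mean_income}")
print(f"Median: {median_income}")
```

A country’s GDP per capita is the mean; the median tells you rather more about how the person in the middle is doing.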
Wherever they’re intended to generate information on merit, metrics of all kinds are comparative standards. And it’s often surprising how little discussion they attract. KPIs within organisations are notorious for being taken too much at face value — setting off all kinds of misalignments between what the organisations and/or those within them are supposed to be about and what they’re actually doing. This then sets off invidious incentives where the metric is managed for rather than the outcome that metric is supposed to be capturing.
There’s always some discussion about the marking regimes of schools, but it’s often limited to global debates about whether such metrics will encourage ‘teaching to the test’. Something I’d like to see more attention to is whether we want such systems to identify as the best those with the best conceptual grasp of their subject, or those who manage to train themselves to make the fewest mistakes.
And then there are comparative standards that arrive from outside the systems they operate as standards for …
To be continued.