Crowdsourcing credentials

I was at a PC function yesterday on ‘disruptive technology’ and said, in a rather crabby way, that I’d been talking about the significance of informing consumers about the quality of products for a long, long time, and that it’s only now, when people can actually see Uber and Airbnb in front of their eyes, that they’re starting to think maybe it’s worth thinking about. As I said, this is basically Stigler stuff – from the 1950s. But little interest had been taken in it. Anyway, there you go.

There’s something else I’ve been banging on about: the importance of credentialling on the merits, rather than getting all the rent collectors’ wagons in a circle and arranging some cosy little occupational licensing agreement. I recall Anthony Goldbloom telling me that he wanted Kaggle to help the nerds in the corner who were gun data scientists get appreciated for their skill. Kaggle crowdsources data science credentials and makes them far more rigorous – because it can rank data scientists by getting them to compete with one another.

Micro-economic reform, if it hadn’t degraded into deregulatory chat from the business class seats at 30,000 feet, might have made more of a thing about occupational training, licensing, credentialling and ongoing performance appraisal of high-skill professions than of warfies and mine workers, but alas that’s not how the game’s been played. No-one said that if we’re really going to get the best out of tele-health, we’d better ensure that doctors looking after their own incomes aren’t the gatekeepers of the system.

Anyway, I was put in mind of all this when Tim van Gelder sent me this amazing article about how, just as the hoi polloi can discern with considerable accuracy who’s any good at playing footy, they can do something similar when shown videos of surgeons at work.

Dr. Thomas Lendvay and other researchers’ … findings, published in the Journal of Surgical Research and the Journal of Endourology, show remarkable agreement between the evaluations of laypeople and those of surgical experts upon seeing videos of surgeons’ hands performing practice procedures.

What’s more, in one of the studies, it took just 10 hours to collect feedback from 1,500 laypeople versus more than three weeks from three busy surgeons. Further, 90% of laypersons’ evaluations included contextual comments to explain ratings, versus 20% of surgeons’ evaluations.

Imagine if in the next wave of micro-economic reform we comprehensively redid competition policy with a particular focus on getting the best partnership between rapidly evolving AI and decision-support tools and human skill. The problem is, you can’t just deregulate it all. You have to deregulate and re-regulate on the merits. Something we’re not so experienced at. Still … no time like the present to start.

This entry was posted in Economics and public policy, Information, Innovation.