Here are some claims from recent research on fintech and AI.
Berg, Burg, Gombovic, and Puri (2018) suggest that digital footprints can help boost financial inclusion, giving unbanked consumers better access to finance. Similarly, Frost et al. (2019) show that fintech firms often start as payment platforms and later use consumer data to expand into the provision of credit, insurance, and savings and investment products.
Yet public policy and (shall we call it?) ‘concerned advocacy’ approach such innovations in a highly asymmetric way. They don’t ask ‘what level of regulation would maximise overall good (Bentham), or overall good for the most disadvantaged (Rawls)?’ They ask ‘can this new technology produce invidious discrimination?’ Almost inevitably it will. But the focus of policy and advocacy then turns to minimising the downsides, not maximising the upsides or optimising the net outcome.
I wrote about this in a slightly different context in 2017:
The big story in the excitement about DeepMind and Britain’s National Health Service is the way in which the interests and technical capabilities of private operators are dominating public interests and capabilities. But as right as it is to call that out, it’s only the first step towards better outcomes. We need to articulate what those public interests are, and then understand how best to build a world that optimises them. And while the Googles of the world have been building their preferred world for over a decade and show no signs of slowing down, the representation of the public interest has been far more tentative — politically, but also intellectually.
In fact, while there are no doubt exceptions to this, the same pattern is emerging in the application of AI much more generally. The case for restricting AI emerges mostly from left-of-centre activists. Their concern tends to centre on identitarian categories (particularly, in this case, gender and colour) and much less on class and education. I’m seeking to place these issues in what I think of as their correct context, not to play down their significance. (And yes, I got ethics approval to write this article and have counsellors standing by in case trauma ensues.)
How could we do better? I doubt we can do better just by articulating this, as the issues are highly emotive and existing interest groups are in a difficult-to-shift equilibrium, one based on dumbing the message down to make it entertaining for the proles on mass and social media. I think we need to develop mechanisms of ‘meso-governance’ and ‘meso-politics’, as it were. Thus I’d like to see users’ councils formed using sortition-like mechanisms, with people who are representative of users (but not self-selected by activism) paid to spend a reasonable amount of time learning about the issues and then reflecting the interests of their communities in the development of technology and its governance.
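To make ‘sortition-like mechanisms’ a little more concrete, here is a toy sketch (in Python, purely illustrative and not part of any actual proposal) of drawing a council by lot, stratified so that its make-up roughly mirrors the user base rather than whoever self-selects. The single ‘stratum’ label per user, the `draw_council` helper, and the proportional seat allocation are all my assumptions for the illustration; a real scheme would stratify on several dimensions.

```python
import random
from collections import defaultdict

def draw_council(users, council_size, seed=None):
    """Randomly draw a council whose make-up roughly mirrors the user base.

    Illustrative only: assumes each user record carries a 'stratum' label.
    """
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for user in users:
        by_stratum[user["stratum"]].append(user)

    council = []
    total = len(users)
    for members in by_stratum.values():
        # Proportional allocation: each stratum gets seats in rough
        # proportion to its share of the user base, with at least one seat.
        seats = max(1, round(council_size * len(members) / total))
        council.extend(rng.sample(members, min(seats, len(members))))
    return council

# Hypothetical user base with a single demographic label per user.
users = [{"id": i, "stratum": s}
         for i, s in enumerate(random.choices(["urban", "rural", "remote"], k=1000))]
print(sorted(m["id"] for m in draw_council(users, council_size=12, seed=1)))
```

The point of the lottery, as with citizens’ juries, is that members owe their seat to chance rather than to activism or careerism, which is what the ‘not self-selected’ condition is meant to secure.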
I’ll be interested in your comments below.
Another example of something similar: some people campaign against vaping because it is risky, without considering the extent to which it can lower risk by providing an ‘off-ramp’ from cigarette smoking.
Hi Nick,
the regulation of AI, recognition technology, financial traffic, digital money, and such things is hard to read. So many interests with hard-to-fathom agendas. So many new areas of technology. So few politicians who have any idea what these things are about. And yet huge volumes of regulation I don’t have time to absorb. Do you understand these things? I have the impression no-one does. I sort of doubt those citizens’ councils would understand this deeply enough to be of use, but I guess one can try.
The main thing I would want to know is whether it is actually possible at all to regulate new developments in AI. Wouldn’t the most innovative companies simply move elsewhere?
I don’t think it makes sense to formulate general principles on the public interest in this field. Too vague. Much as in a market economy, I suspect we will find out what actually ‘works’ via experimentation by lots of authorities, which will reveal the relevant public interests. Also, it would be good to have some authorities thinking hard about how to nationalise some of the functions of the internet so as to be able to tax the transactions occurring on it.
Thanks Paul
I certainly don’t claim to understand it.
Your response seems to me to have missed my point that I’m after a ‘meso-politics’. Of course you might think it impossible, and I don’t want to make too many claims for it. But the idea would be that if AI is being developed with a user group, that user group must be resourced to become an informed participant in the process.
If Google wants to ingest a bunch of patient data from the NHS, then rather than some tortured ethics process (which is the default position now, and which seems invariably to degrade into avoiding anything that might make people feel uncomfortable), there should be a group of ordinary patients who are paid to do their best to get on top of what’s going on, to agree or not to various practices, and to be an ongoing part of the process.
The idea is that they should be in charge, though this would then depend on whether one could build institutions around them to access well-motivated (and not careerist or otherwise power-dominated) expertise.
There are thousands (if not more) of AI developers all over the world, lots of them in start-ups in developing countries. There is a huge market in big health data that they can just buy. Indeed, there are thousands of data brokers out there.
So what do you actually have in mind, even for your NHS example? In charge of whom and what? How do you do meso-politics on such issues?
How about this radical solution?
Let consumers decide with all the information available.
One of the curious things I learnt from being part of a failed bid that involved a lot of AI work is that the actual scope of what can be and is being done with it is enormous, most of which people have not the slightest idea about: basically making anything and everything more efficient. In this respect, I’m with Paul in that it is going to be very hard to regulate (indeed, it’s hard to see why you would want to for most of it). There are probably a few visible technologies, like face recognition, that one might think about regulating, but these are a speck in the ocean.
Thanks Conrad. That sounds right to me. But there are numerous ‘hotspots’ of regulation, and I’d be interested in what people think of my idea in those contexts.