A little post in last week’s science news said that the South Korean government was thinking about the ‘golden rules’ of robot-human inter-relations. They are doing this because they sincerely believe that the technology of neural networks and miniature chips will pretty soon make it possible to have all kinds of thinking, learning, mobile robots whizzing around. If a car can think for itself, then you’d better equip it with some rules so that humans come to no harm.
What is immediately striking about the set of rules they are thinking of is that they hark back to Asimov’s rules, which incidentally were also the ones used in the film Artificial Intelligence. They basically boil down to doing as humans say and not harming any human. Apart from that, self-preservation is allowed.
When you think about it, the mind starts to boggle at how extraordinarily easy it would be to fool a robot governed by such simple rules into doing your bidding. A gangster could tell the first beefy-looking robot on the street that he faces starvation unless the robot gets him some money from the safe of the nearest bank. The robot, equipped with the golden rule of not allowing any harm to come to a human, would oblige. A kid runs up to a car; the car, anticipating that the kid will run into it and get hurt, quickly reverses into the cars behind. And so on: with a little bit of imagination it quickly becomes obvious that Asimov’s golden rules would lead to mayhem and abuse. You’re going to need ‘don’t do what people say unless’ rules, ‘harming a bad person who is about to harm innocent others is OK’ rules, and all sorts of other intricate rules.
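To make the loophole concrete, here is a deliberately naive sketch (every name and flag in it is hypothetical, not anyone’s actual proposal) of a robot applying the golden rules literally. The weak point is that both inputs come from the robot’s fallible perception and from what humans tell it:

```python
# A deliberately naive rule engine: obey humans, never harm a human,
# never allow harm through inaction. All names here are hypothetical.

def decide(order: str, order_harms_a_human: bool,
           refusal_harms_a_human: bool) -> str:
    # Rule: never act so as to harm a human.
    if order_harms_a_human:
        return "refuse"
    # Rule: never allow a human to come to harm through inaction.
    if refusal_harms_a_human:
        return f"comply urgently: {order}"
    # Rule: otherwise, do as humans say.
    return f"comply: {order}"

# The gangster simply lies: robbing a safe harms no one the robot can
# see, and 'I will starve' flips the inaction clause in his favour.
print(decide("fetch money from the bank safe",
             order_harms_a_human=False,
             refusal_harms_a_human=True))
# -> comply urgently: fetch money from the bank safe
```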
When you think about it, having dumbo robots out there actively trying to avoid human harm will probably do more harm than good. Robots don’t just need a bit of intelligence: they need to be bloody smart before you can really let them interact with humans. They need to be able to figure out when someone is lying to them; to anticipate, in a very short space of time, the chain of events an evasive manoeuvre would set off; to weigh several options in real time so as to cause the least harm; and so on. They would need to be on a par with humans in the strategic game of deceit and counter-deceit that we play. Lacking a full set of mental abilities, and the associated norms of conditional trust and scepticism, would seem a recipe for disaster.
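For what it’s worth, here is a hedged sketch of the kind of real-time weighing that paragraph demands. Every probability and harm score below is an invented perception estimate; producing such estimates reliably, in milliseconds, is precisely the ‘bloody smart’ part:

```python
# A sketch of choosing the evasive option with the lowest expected harm.
# The numbers are made-up perception estimates, not real data.

def least_harmful(options):
    """Pick the action minimising probability-weighted harm."""
    return min(options, key=lambda o: sum(p * h for p, h in o["outcomes"]))

options = [
    # each outcome is (probability, harm score on an arbitrary 0-10 scale)
    {"name": "brake hard",        "outcomes": [(0.05, 10), (0.95, 0)]},
    {"name": "reverse into cars", "outcomes": [(0.90, 3), (0.10, 8)]},
    {"name": "do nothing",        "outcomes": [(0.60, 9), (0.40, 0)]},
]

print(least_harmful(options)["name"])  # -> brake hard
```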
And then of course, there is the danger of re-programming. My robots versus yours?
Hence, the idea that a government can just set the rules and no private company would override them is a bit naive. Unless you make robot creation a nationalised industry, you’re going to need robots policing other robots, and perhaps even a national army to counter any possible private one.
The more I think about it, the clearer it becomes that simply writing a couple of rules to govern robot-human inter-relations is cloud cuckoo land. If we ever get robots smart enough to interact with us (I don’t know how far along they are, though the last lectures I attended on the subject five years ago suggested we’re still light years away from near-human intelligence), we’re looking at a whole new ballgame.
Asimov (note spelling) largely used the rules as a plot device, not as a serious attempt at a philosophy of robot intelligence. When we get as far as a Roomba that actually cleans the whole floor without taking weeks to find all the nooks and crannies, get worried. Before that it’s just pie in the sky. We (I mean computer scientists, programmers, etc.) can’t even get expert systems running properly, let alone simulate the most basic intelligence of something seemingly as simple as an ant.
We won’t have to give them rules. The robots at some point will subconsciously begin establishing a religion with moral standards. When the robot figures out his intelligence is no match for the ultimate source of existence, he will acquire wisdom and faith in things that lie beyond all categories of thought.
Asimov’s robots were a lot more sophisticated than Paul’s post suggests, as demonstrated in at least one of the Robot novels (possibly the one in which a human detective investigates the “murder” of a humaniform robot), where a human (possibly a robot scientist) laughs at the suggestion that the robots were very simple.
In Paul’s example, the robot would either be able to tell that the person wasn’t actually starving, or it would make some balance-of-probabilities guess about whether the person was starving; and if they were, it would obtain food for the person rather than money. In the example about the car, all the cars would be able to react to each other practically instantaneously. In the Asimov world, humans are coddled by robots.
As people have commented here, merely by reprogramming a robot’s notion of what a “human” is, robots could be made to do terrible things to humans.
Will Smith’s film didn’t bear much resemblance to any Asimov novel.
I’m sure I remember the novel “I, Robot” from school – I remember being particularly affected at the time by one sad story involving a child destroying, or allowing to be destroyed, a previously loved robot (? – I think that’s right).
It’s clear everybody else knows a lot more about this than me. I am desperately trying to remember which film said what. Artificial Intelligence (Kubrick’s brainchild, finished by Spielberg?) was about a boy robot programmed for true love, and I, Robot (Will Smith) was about robots deciding people were too stupid to take care of themselves. I think I got them mixed up in my blog above, but I found both good fun and clever.
The thing about posting on the net is that you can be completely ignorant but claim to be, and come across as, the font of all knowledge on anything.
Pappinbarra Fox said:
Well, yeah, but could you write a program to do it? If the program was designed with Asimov’s three laws in mind, would it not refrain from pointing out spelling mistakes in posted articles, so as not to hurt the feelings of the poster?
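As a rough, tongue-in-cheek sketch of what such a program might look like (the word list and the ‘sensitive poster’ flag are entirely made up):

```python
# A spellchecker bound by a First-Law-ish rule: do not inflict
# (emotional) harm on the poster. Purely a hypothetical sketch.

KNOWN_WORDS = {"asimov", "robot", "rules"}

def gentle_spellcheck(word: str, poster_is_sensitive: bool) -> str:
    if word.lower() in KNOWN_WORDS:
        return f"'{word}' looks fine."
    if poster_is_sensitive:
        # First Law: pointing out the mistake would cause harm.
        return "No comment."
    # Second Law: otherwise, obey the order to proofread.
    return f"Did you mean one of {sorted(KNOWN_WORDS)}?"

print(gentle_spellcheck("Azimov", poster_is_sensitive=True))   # -> No comment.
print(gentle_spellcheck("Azimov", poster_is_sensitive=False))
```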
I think there is something fundamentally different between the intelligence of an ant (or, for that matter, a human) and that of a robot or computer. I think the gateway to this realm is the philosophical concept of ‘things as they are in themselves’.
The now emeritus Rouse Ball Professor of Mathematics, Roger Penrose, discussed this extensively in his popular books, albeit from a different angle; namely, that science and mathematics are incomplete. It all amounts to the same thing really, and slides into religious questions, though not ones in which an orthodox practitioner would find Truth. Yet it is quite surprising to find atheists and theists saying the same thing, knowing that the churches would thoroughly excommunicate the former over a vapid verbal formulation.
Anyway, I am glad some people much more intelligent than I have got computer networks working, and probably because I am so unimportant I have no problems with security. I treat my computer kindly, and listen carefully for what it is really saying to me. This is true; it is also weird – goodnight.
OK, tin-boy:
Tea is made with BOILING water poured on top of the leaves or bag, not merely boiled water. If it’s a bag, do NOT pour the milk in until it has been jiggled for at least TWO MINUTES.
Oh yeah, don’t be evil, three laws, all that stuff. But BOILING water.
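And since you asked for rules a robot could follow, here’s that protocol written down (a sketch only; the temperature and timing are the ones stated above, everything else is invented):

```python
# The tea protocol as a hard rule for a hypothetical domestic robot.
import time

def make_tea(water_temp_c: float, is_bag: bool) -> str:
    if water_temp_c < 100.0:
        # BOILING water, not merely boiled: send the robot back to the kettle.
        raise ValueError("Re-boil the kettle.")
    if is_bag:
        time.sleep(120)  # jiggle the bag for at least TWO MINUTES
    return "now, and only now, add the milk"
```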