"I, Robot" - the 3 laws considered harmful

What happens when a robot begins to question its creators? What would be the consequences of creating a robot with a sense of humour? Or the ability to lie? How do we truly tell the difference between man and machine? In "I, Robot", Asimov sets out the Three Laws of Robotics – designed to protect humans from their robotic creations – and pushes them to their limits and beyond.
After attending a lecture on the ethics of self-driving cars, I decided to re-read Asimov's "I, Robot".
Any discussion of robots killing people inevitably returns to the supposed wisdom of the "3 laws of robotics".
I argue that the "3 laws" should be considered harmful.
Let's remind ourselves of the laws:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The first thing to note is that these are a literary device - not a technical one. Killer robots are boring sci-fi. The interesting sci-fi looks at "good" robots which inexplicably go bad. "I, Robot" is essentially a collection of entertaining logic puzzles. Æsthetics are not a good basis on which to build an algorithm for behaviour.
The second thing to note is that the laws are vague. What do we mean by "harm"? This is the lynchpin of one of the stories where (spoilers!) a telepathic robot avoids causing psychological pain.
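To see why that vagueness matters, here's a minimal sketch - my own illustration, not anything from the book - of what naively encoding the First Law might look like. Every function name below is hypothetical, and each one conceals a question the law simply doesn't answer:

```python
# A naive attempt to encode Asimov's First Law as executable logic.
# Every helper below is hypothetical; the point is that each one
# hides an unsolved (or unsolvable) problem.

def violates_first_law(action, world_state) -> bool:
    """Return True if `action` is forbidden by the First Law."""
    # "A robot may not injure a human being..."
    if causes_harm(action, world_state):
        return True
    # "...or, through inaction, allow a human being to come to harm."
    if is_inaction(action) and harm_occurs_without_intervention(world_state):
        return True
    return False

def causes_harm(action, world_state) -> bool:
    # What counts as harm? Physical injury? Psychological pain?
    # Financial loss? Over what time horizon? The telepathic-robot
    # story turns on exactly this ambiguity.
    raise NotImplementedError("'harm' has no agreed definition")

def is_inaction(action) -> bool:
    # Is refusing to act an action? Is waiting? Any formalisation
    # of "inaction" is a policy decision, not a fact.
    raise NotImplementedError

def harm_occurs_without_intervention(world_state) -> bool:
    # Requires predicting the future state of the world - intractable
    # in general.
    raise NotImplementedError
```

The control flow is trivial to write down; it's the predicates that are impossible to fill in.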
Most people can't deal with complexity. Here's a common complaint:
> Archimedes’ Principle: 67 words
> The Ten Commandments: 179 words
> The Gettysburg Address: 286 words
> The Declaration of Independence: 1,300 words
> The US government regulations on the sale of cabbage: 26,911 words

As circulated by email from everyone's "kooky" relative.
The claim above is utter nonsense - but laws are complex. The Ten Commandments may be short - but they cause a huge amount of ambiguity. What does "Thou shalt not kill" cover? Gallons of ink have been spilled interpreting those four words. Ironically, blood has been shed trying to resolve those ambiguities.
Would it have been helpful to have, from the start, 30,000 words explaining exactly what each rule meant?
Thirdly, these aren't natural laws. Too many people - including scientists - seem to think that these are hard-wired into robots! I've had conversations with otherwise rational people who haven't quite grasped that these aren't baked into the fabric of existence.
The "laws" are a harmful meme. That's testament to the power of Asimov's writing. His stories have infected every discussion of robotics and ethics. Even though we know it is fiction, something deep inside us recognises them as a necessary part of creating "acceptable" robots.
The 3 laws are brilliant for story-telling, but lousy for AI policy making.
Verdict
- Read on Amazon Kindle
- Audiobook and ePub from Kobo
- Paper book from Hive
- Listen on Audible
- Publisher's details
- Borrow from your local library
- ISBN: 9780194230698