A Turing Test For Self-Driving Cars
Imagine that you are sat, blindfolded, in the back of a taxi. How could you tell if you were being driven by a human or an autonomous vehicle?
If you've not read Alan Turing's "Computing Machinery and Intelligence" - the paper in which he describes the imitation game - I can highly recommend it. It is short, well written, and contains a whole world of ideas.
This is where we get the concept of the Turing Test. Can a human be fooled into thinking that the computer they are communicating with is a human? It is often assumed that the communication must be typed, but I don't think that's necessarily the case.
In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms.
- Alan Turing (October 1950), "Computing Machinery and Intelligence", Mind, LIX (236): 433–460, doi:10.1093/mind/LIX.236.433
In this part of the paper, Turing is describing the original imitation game, as played between humans. He says nothing about how the game should be played with a computer.
Turing was aware of the limits of the technology of his day.
In the early 1950s, speech recognition was in its infancy, so it is perhaps understandable that he treated the idea of talking to a computer as infeasible.
Is there any reason why, today, the questions and answers couldn't be spoken aloud?
Am I being heard by a human?
It can't have escaped your attention that listening devices are popping up in all sorts of places. From telephone banking to home assistants, computers are listening to us and (mostly) understanding what we are saying.
I've played with cheap crowd-funded computers which can hear speech and transcribe it instantly.
We're now at a point where a computer can understand clearly spoken words about as accurately as a human listener.
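If you want to try this yourself, here's a minimal sketch using the Python SpeechRecognition library and its free web-API recogniser. The audio file name is made up - treat it as an illustration rather than production code.

```python
# Minimal speech-to-text sketch using the SpeechRecognition library.
# Assumes a WAV recording called "question.wav" - purely illustrative.
import speech_recognition as sr

recogniser = sr.Recognizer()

with sr.AudioFile("question.wav") as source:
    audio = recogniser.record(source)  # read the whole recording into memory

try:
    # Uses Google's free web speech API under the hood.
    print(recogniser.recognize_google(audio))
except sr.UnknownValueError:
    print("Sorry, I didn't catch that.")
```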
Is this a human voice?
The history of speech synthesis goes back centuries - although the results were little more than the whinges of cacophonous bagpipes.
But by the early 1940s, electronic speech was a reality:
(There is a longer demonstration available.)
Today, the latest artificial intelligence research from Google's DeepMind has produced WaveNet - which is claimed to produce the most lifelike computer-generated speech yet.
It's not exactly compelling, but it is getting there.
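To get a feel for how close we are, you can generate some WaveNet speech yourself. The sketch below assumes Google's Cloud Text-to-Speech client library and one of its English WaveNet voices - the exact voice name and API surface may differ depending on the library version you have installed.

```python
# Sketch: synthesise a sentence with a WaveNet voice via Google Cloud Text-to-Speech.
# Assumes the google-cloud-texttospeech library and valid credentials; voice names vary.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Where would you like to go today?"),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-GB",
        name="en-GB-Wavenet-A",  # a WaveNet voice; plainer "Standard" voices also exist
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)

with open("taxi_driver.mp3", "wb") as f:
    f.write(response.audio_content)  # play it back and judge for yourself
```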
Aside from determining whether the voice is generated organically, there is another aspect - timing and intonation. In 2011, the film critic Roger Ebert proposed his "Ebert Test":
If the computer can successfully tell a joke, and do the timing and delivery, as well as Henny Youngman, then that’s the voice I want
You can listen to Ebert's synthesised voice in his TED talk.
Could you sit in the back of a taxi and make conversation with a computer? At present, it would probably understand you. And you would understand it easily.
Is This Car Being Driven By A Human?
Later in the paper, Turing talks about the various rules that a computer may need to be aware of.
By "rules of conduct" I mean precepts such as "Stop if you see red lights," on which one can act, and of which one can be conscious.
An AI might provide a superior ride to that offered by a human. Should we build our AIs to imitate our flaws?
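To make the distinction concrete, here's a toy, entirely hypothetical sketch of a "rule of conduct" written out as explicit code. Real self-driving systems don't work from a list of hand-written if-statements like this - their behaviour is largely learned from data - but it shows the kind of precept Turing had in mind.

```python
# A toy "rule of conduct", in the spirit of Turing's example.
# Entirely hypothetical - a real autonomous vehicle's perception and planning
# are learned and probabilistic, not a single if-statement.

def should_stop(traffic_light_colour: str, obstacle_ahead: bool) -> bool:
    """Stop if you see red lights - or if something is in the way."""
    if traffic_light_colour == "red":
        return True
    if obstacle_ahead:
        return True
    return False

print(should_stop("red", False))    # True - the explicit rule fires
print(should_stop("green", True))   # True - a different rule fires
print(should_stop("green", False))  # False - carry on
```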
Does this smell like a human?
What about our non-obvious senses? Back in the 1950s, Turing wrote:
No engineer or chemist claims to be able to produce a material which is indistinguishable from the human skin. It is possible that at some time this might be done, but even supposing this invention available we should feel there was little point in trying to make a "thinking machine" more human by dressing it up in such artificial flesh.
Nowadays, the quest for artificial flesh is driven by lascivious desires - but again it raises an interesting point. Would you be able to tell that your taxi driver was a robot if they smelled of sweat rather than WD-40? If they got goosebumps during a breeze?
At the moment, our attempts to flesh out robots hit the "uncanny valley" - a visceral reaction to something which isn't quite human enough to be convincing.
A good example of the uncanny valley is this almost-human-yet-really-creepy robot of the actor Scarlett Johansson.
It is hard to identify precisely what makes the robot look so wrong. The pose is a little off, the skin tone is not quite right, the eyes are... I can't put my finger on it.
To experience the full terror, you can watch the video about the robot's construction. Warning: you may find this disturbing. It also contains (robot) nudity which... I dunno... feels like it ought to be NSFW.
Is this car as ethical as a human?
The brakes fail while your taxi is descending a mountain road. The driver can either crash into a bus full of school children - killing them but saving you - or drive over the side of the mountain, killing you but saving the children. What should your driver do?
The "trolley problem" is a classic of moral philosophy. How does a human decide who lives or dies? Can we teach a robot to have ethics? That's what MIT are trying to understand.
Welcome to the Moral Machine! A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars.
We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.
- Moral Machine, MIT
Should an autonomous car prioritise the life of its owner? Would you expect a human taxi driver to do so?
Is "Being Human" The Goal?
For some tasks, humans rule: conversation, pattern recognition, falling in love, creating art.
But humans suck at a lot of things. We're fragile, imprecise, and impatient; we smell, we cut corners, and we carry diseases; our morals are dubious, and we tire easily.
I'd argue that in many uses of artificial intelligence, Turing's test sets an artificially narrow limit. Do I want an autonomous vehicle which is indistinguishable from a crabby, distracted, unintelligible, malodorous, and immoral taxi driver?
Achieving parity with humans seems like a low bar for machines.