The Invisible Nightmare
It rather sounds like the title of a Doctor Who story from the late 1970s, doesn't it?
But it's a term that I think we're going to be hearing a lot of in the future.
Jake Levine recently wrote an excellent post on apps which don't require any interaction. It's not quite as crazy as it sounds - the interfaceless application - but refers to a class of program where the only interaction is in the act of being notified.
The examples given are fairly obvious in retrospect - an app whose only function is to play a sound when it's about to rain, an exercise app which notifies you once per day about how many calories you've used.
We see these a lot with SMS alerts - a notification to say your credit card has been used, for example - and in the automated emails from services which tell us what our website stats are for the month.
There is a risk that we get overwhelmed by these notifications. Not just through Fear Of Missing Out - but just through the sheer volume. I'm sure we've all gone through periods of unsubscribing from every bloody newsletter which has flooded our inbox.
That's a fairly visible nightmare. It's what happens next which scares me.
Invisible Apps
Are there apps which you use, silently making changes, and you're not even aware of them? Of course!
On a prosaic level, there's the app which automatically adjusts your screen's brightness. The app monitors the ambient light and silently adjusts the screen so it is readable. It never pops up and says "Hey! I just changed the screen - keep this brightness? Y/N" It just plods along.
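The logic behind that sort of app is almost trivially simple. Here's a hedged sketch - the function name, the lux range, and the 0-100 brightness scale are all invented for illustration, not any real device API:

```python
def ambient_lux_to_brightness(lux: float) -> int:
    """Map an ambient light reading (in lux) to a screen brightness level (0-100).

    A deliberately naive sketch: clamp the reading to an assumed
    0-10,000 lux range, then scale it linearly to a percentage.
    Real implementations smooth the reading over time and use a
    perceptual (roughly logarithmic) curve rather than a linear one.
    """
    lux = max(0.0, min(lux, 10_000.0))
    return round(lux / 10_000.0 * 100)

print(ambient_lux_to_brightness(0))       # pitch dark -> 0
print(ambient_lux_to_brightness(5_000))   # bright room -> 50
print(ambient_lux_to_brightness(10_000))  # direct sunlight -> 100
```

The interesting thing isn't the maths - it's that the loop runs constantly, silently, with no "Y/N" prompt anywhere in it.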
We are surrounded by devices passively collecting information, and then making autonomous decisions.
Can we ever know why those decisions are made? Can we understand and - more importantly - correct their mistakes?
Google Now scans my inbox and occasionally alerts me when it thinks I'm about to miss a flight. Handy - but creepy. It would be trivial for it to see my calendar for tomorrow, see that there's bad traffic, and autonomously decide to set my alarm clock app to wake me half an hour earlier than usual.
That's... good. Probably. In an ideal world where Google Now is 100% confident that I'm not taking the bus, or haven't decided to cancel, or any one of a hundred other variables that may impact the decision-making process.
Your phone could look at your temperature, your hormone levels, and your shopping habits, and correctly conclude you're pregnant. Is it acceptable if your phone refuses to let you order a glass of wine? What if it automatically tells your friends to cancel your joint sky-diving lesson?
As we move more of our life into digital services, it's tempting to think of our phone as a latter-day butler. A silent presence, looking over our shoulder, getting us out of scrapes and - somehow - always knowing when to call us a taxi home.
It's possible that the level of artificial intelligence needed to do this in a useful way may arise - I'm not confident of seeing that any time soon. Having a butler that is continuously intervening - and is often wrong about its interventions - would be supremely distressing.
A Malicious Siri
Do we need protecting from autonomous robots making important decisions for us?
Take, for example, your credit score. There are thousands of software processes which are automatically looking at your spending, your saving, how quickly you pay back debt, and a hundred other facets of your life. All of which builds up to create a credit score - essentially a quantified risk profile. If your score is too low - no mortgage for you.
What are the systems which decide these scores? Are they accurate? Do they have bugs? Are they the same systems which gave a clean bill of health to Lehman Brothers?
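To make the opacity concrete, here's a toy caricature of what such a scoring rule might look like. Every weight, threshold, and variable name below is invented - real scoring models are proprietary and vastly more complex, which is precisely the problem:

```python
def toy_credit_score(on_time_payments: int, missed_payments: int,
                     utilisation: float) -> int:
    """A made-up credit score: combine a few signals into one number.

    Weights are arbitrary; the result is clamped to the familiar
    300-850 band. You have no way to inspect or appeal any of this.
    """
    score = 500
    score += on_time_payments * 5        # reward a payment history
    score -= missed_payments * 40        # punish missed payments heavily
    score -= int(utilisation * 100)      # punish high credit utilisation
    return max(300, min(score, 850))

print(toy_credit_score(20, 0, 0.3))   # -> 570
print(toy_credit_score(0, 10, 1.0))   # -> 300, no mortgage for you
```

Even in ten lines, the weights are invisible to the person being scored. Now imagine thousands of these processes, each with its own bugs.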
Imagine your phone realises that it's in a noisy environment - so switches from silent mode to loud mode. Useful, unless you're in a cinema and receive a call during the emotional dénouement.
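The naive rule is a one-liner - here sketched with an invented decibel threshold - and the cinema scenario shows exactly why a one-liner isn't enough:

```python
LOUD_THRESHOLD_DB = 70  # assumed cut-off for a "noisy" environment

def choose_ringer_mode(ambient_db: float) -> str:
    """Naive rule: noisy room -> loud ringer, quiet room -> silent."""
    return "loud" if ambient_db >= LOUD_THRESHOLD_DB else "silent"

# The failure mode: a cinema during an action scene is loud,
# so the rule picks exactly the wrong setting for the quiet
# emotional scene that follows.
print(choose_ringer_mode(85))  # -> "loud"
print(choose_ringer_mode(40))  # -> "silent"
```

The rule isn't buggy; it's just blind to context. That gap between "working as designed" and "doing the right thing" is where the nightmare lives.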
Worse, could our "faithful companions" be actively working against us?
Imagine Amazon released "Percy The Personal Shopper". A smart software agent which would do your shopping for you - picking out the latest fashions in the correct sizes, recommending books you'll just love! What could be better? (Assuming he is smart enough to understand your sarcasm and doesn't automatically order you 3 tons of potatoes.)
But, of course, Percy doesn't work for you. He works for Amazon. He isn't suggesting clothes you'll like - he's suggesting clothes which are taking up space in Amazon's warehouse, or perhaps those which have a high profit margin.
Your exercise scheduler (Free! Sponsored by Nike!) automatically arranges tennis matches for you against better players who just happen to have a Nike racket.
Your calendar automatically sends your grandmother flowers on her birthday. It doesn't know that she died, and that you only keep the reminder in your calendar as a memento.
Taking Control
Automatic decision making is a fascinating branch of artificial intelligence. It won't be long before Google Now and Siri really do start taking actions for us in the background rather than simply notifying us.
How do we stop these invisible systems from tormenting us with their ill-informed - but well intended - actions?