How do you stop people accessing data they shouldn't?
I used to work in a call centre for a Very Big Company. Every week, without exception, we'd get a bunch of new starters to train. And every week, without exception, a newbie would be fired after looking up a famous person's data.
This was in the days before GDPR. There was a lot less general awareness of data protection issues. It didn't matter how often we drilled it into trainees' heads - someone would breach privacy within 5 minutes of getting on the system.
It seemed to be an almost irresistible honey-pot. Imagine being able to look up the REDACTED bill of a pop-star. Or your neighbour. Or your no-good cheating ex. Or... you get the idea. It's no wonder that countless people felt compelled to risk their own jobs.
Training had some effect. We repeatedly warned what would happen if you were caught. We patiently explained how every interaction was logged. But that wasn't enough. Some people forgot. Or thought they could outsmart us. Or had a human moment of fragility and slipped.
Warning messages helped for a little while. We forced people to read a dire warning of the consequences of a policy breach. But, after seeing them a couple of times, people's eyes glazed over. Forcing them to type "I understand and agree" didn't seem to help either. And, of course, anything which led to longer call times meant lower customer satisfaction.
For the same reason, two-person approval was also a non-starter. Before accessing an account, a manager had to approve the request. It wasn't hard to distract an over-worked supervisor and, in some cases, it didn't take much to bribe them.
We tried tying access to an incoming phone call. Only present the account if the call came from the caller's registered phone number. But people have multiple numbers. Or are at the office. Or withhold their number. So that didn't help much.
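If you want to picture the logic, it was roughly this - a Python sketch of the idea, with invented names; the real system was nothing so tidy:

```python
import re

def normalise(number: str) -> str:
    """Crude normalisation: keep digits only and compare on the
    last ten, so "+44 20 7946 0000" matches "020 7946 0000".
    A real system would use proper E.164 handling."""
    return re.sub(r"\D", "", number)[-10:]

def can_view_account(account, incoming_cli):
    """Only present the account if the call arrived from a number
    registered against it. incoming_cli is None when withheld -
    which, as we found, is exactly when the scheme falls apart."""
    if incoming_cli is None:
        return False
    return normalise(incoming_cli) in {
        normalise(n) for n in account.registered_numbers
    }
```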
One suggestion was to only allow viewing an account during a phone call. It was a good idea, with one minor flaw. Customers would quite often finish one transaction and then say "Can you please help me with my other account". Telling them to call back was a non-starter, so users had to be able to access multiple accounts per call. A simple "please hold" and a ne'er-do-well could spoof access to a different account.
Passwords nearly worked. "Please can I have the 3rd, 7th, and 19th character from your password?" The system presented a box for the call centre worker to type in the characters. But most callers hadn't set up a password. And those that had couldn't remember it. And, as call times increased, we were forced to scrap it.
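The mechanics are easy enough to sketch (hypothetical Python - the function and the example password are mine, not the real system's):

```python
def check_characters(stored_password: str, challenge: dict[int, str]) -> bool:
    """Verify the characters the caller reads out against the
    requested 1-indexed positions, e.g. {3: "r", 7: "t", 19: "e"}.
    The catch: to check individual characters, the password can't
    be stored as a one-way hash - a security trade-off in itself."""
    return all(
        position <= len(stored_password)
        and stored_password[position - 1].casefold() == char.casefold()
        for position, char in challenge.items()
    )

# The agent types in what the caller says:
check_characters("correct horse battery", {3: "r", 7: "t", 19: "e"})  # True
```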
Two-factor authentication was supposed to be the saviour. We'd text or email a code to a caller and that would unlock access to their account. But people don't always have access to their mobile, or have a good signal, or - in those days - have their email in front of them. Even if they did, callers got increasingly frustrated at the baroque security barriers.
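Under the hood it's just a one-time code with an expiry - something like this sketch (names invented; actually sending the SMS or email is left out):

```python
import secrets
import time

CODE_TTL = 300  # seconds a code remains valid
_pending: dict[str, tuple[str, float]] = {}

def issue_code(account_id: str) -> str:
    """Generate a six-digit one-time code and remember when it
    was issued. The telephony side texts/emails it to the caller."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[account_id] = (code, time.monotonic())
    return code

def verify_code(account_id: str, attempt: str) -> bool:
    """Codes are single-use (hence the pop) and expire after CODE_TTL."""
    entry = _pending.pop(account_id, None)
    if entry is None:
        return False
    code, issued_at = entry
    return (secrets.compare_digest(attempt, code)
            and time.monotonic() - issued_at <= CODE_TTL)
```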
Sure, we recorded every call for "training and monitoring purposes" but there was such a huge volume of calls that the chances of finding a transgression were vanishingly low - and post-hoc processes don't stop data from leaking.
As a stop-gap measure we put a flag on the accounts of famous people. Access to their accounts was forbidden unless authorised by two managers. They were a small proportion of callers, so that process didn't overwhelm staff. But... How do you know if someone is famous? There are lots of Maurice Micklewhites - only one of whom is Michael Caine. And, of course, it's unlikely that your ex-wife's new boyfriend is on that list.
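The check itself was the easy bit - the hard bit, as ever, was deciding who got the flag. A sketch (all names invented):

```python
def can_open(account, agent_id: str, approvals: set[str]) -> bool:
    """Flagged accounts need sign-off from two distinct managers,
    neither of whom is the agent asking. Deciding *which* accounts
    deserve the flag is the part no code can solve."""
    if not account.vip_flag:
        return True
    return len(approvals - {agent_id}) >= 2
```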
Every single measure that we put in place to protect people ended up alienating customers and/or slowing down workers. Every roadblock needed an exception. And those exceptions could be abused.
Of course, some people didn't care about being fired once they had acquired their target's details.
So, what did we do in the end?
I'd love to tell you that we found some magic technological solution which fixed all our problems. Some really cool cryptographic key exchange where customers kept their data in self-hosted pods and access was written to an append-only P2P Merkle-Tree. Perhaps infallible AI which noticed suspicious access patterns?
Instead, we went with a slightly dystopian set of posters and Post-It notes with a glowing pair of eyes printed on them. There's some science to this - people tend to behave more honestly when they think they are being watched.
It wasn't foolproof and, eventually, the eyes lost their power.
Nowadays customers are a lot more accepting of data protection needs. Being asked for a password is a necessary chore. But every so often there's a story about the family of someone with dementia being locked out of an account or a bereaved widow losing access because she forgot her PIN.
There's no right answer here. We either accept that people will occasionally be locked out of their accounts, or we accept that nefarious actors might have access to our information. Tracking access and punishing wrongdoers acts as a deterrent - but won't stop someone sufficiently reckless or determined.
How would you solve this problem?
David O'Brien said on mastodon.social:
That’s a wicked problem, @Edent
Seth Mos said on noc.social:
@iamdavidobrien @Edent I would go as far as to say it is a human problem.
David O'Brien said on mastodon.social:
I’d suggest exit interviews with those dismissed for this behaviour, with user researchers probing to understand motivations.
Then circle the discovery around somehow to screen for it at recruitment, as far as is possible.
No idea if that would be effective. But @databeestje is right it’s a human behaviour problem, not a tech problem. So tech solutions fail. You need psychology and sociology here, not code.
@Edent
Phil Ashby: :marmite:, UBI now said on mastodon.me.uk:
@iamdavidobrien @databeestje @Edent I'd agree with this, and add that many organisations keep too much personal data (like addresses, mobile numbers, etc) for no good reason, significantly increasing the damage possible. In my previous employ at an identity intelligence org we processed terrifying amounts of PII, and consequently anyone who could access some/all of it (it was generally partitioned for various purposes) needed to be properly vetted before employment. Abuse was a criminal offence.
Jack says:
I've worked in places where the attitude to data security was so lax, the test/train system used a stale copy of the production system's data. And it contained much (much) more sensitive and personal data than just addresses and phone numbers. When I tried to get this fixed, I was told it was deliberate - the several hundred people with access to test/train were "professionals who can be trusted" and it's "convenient to have a copy of the live data to test service desk ticket resolutions on".
gadgetoid said on fosstodon.org:
@Edent I’d have fixed whatever byzantine nonsense policies and UX failing warranted so much telephone support and fired the whole call centre but I think telephones are the work of evil devil people and should be illegal anyway 🤣
Efi (nap pet) 🐱💤 said on chitter.xyz:
@Edent my question is "why does anyone with credentials have access to the entire set of data?" and the reasoning is, whatever data you have surely can somehow be sectioned in smaller groups, say, by state?, then you make sure that the workers have access only to states they have nothing to do with
you don't even need to tell them, they are assigned to such and such, so they only get calls from those and only access data from those
if there is a need for access to a forbidden area, you can dispatch it to another worker that can or a manager
either way bribes are always gonna be there, but those can be dealt with in other ways
my point is allowing everyone to access anything with the same credentials is a base flaw of any large modern organization
More comments on Mastodon.