I recently read this interesting, and distressing, story of a man who was drugged and robbed. It's a form of crime which has been going on for centuries, but the 21st-century twist is that the thieves forced him to transfer large sums of money via his phone's banking apps.
While under the influence, the victim used his usernames, passwords, PINs, and biometrics to send money to the criminal's accounts.
Is there a "technological" way to stop this? His banks initially refused to refund the stolen money. Only once the press stepped in did they relent. One bank, Revolut, said:
This was an unusual case where the payments were authorised by the customer but, as is now clear, without his consent.
From the bank's point of view, the victim presented the classic trifecta:
- Something you have - the logged-in banking app and, possibly, bank cards
- Something you know - the passwords, PINs, and memorable phrases
- Something you are - the biometric authentication via fingerprint or face unlock
The bank is correct - the user did authorise these transactions. They had the authority to tell the bank to transfer the money, they presented the right credentials, and responded correctly to the challenges. The bank was satisfied that the user was who they claimed to be. This transaction would not have looked any different from other legitimate transactions.
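The bank's position can be made concrete with a sketch of a three-factor check. This is illustrative Python, not any real bank's system; all names and logic here are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sketch of a bank's three-factor check.
# All names and logic are illustrative, not a real banking API.

@dataclass
class LoginAttempt:
    device_token: str    # something you have: the registered phone/app
    pin: str             # something you know
    biometric_ok: bool   # something you are: the OS's fingerprint/face result

def authorise(attempt: LoginAttempt, registered_device: str, stored_pin: str) -> bool:
    """Return True if all three factors check out.

    Note what this check *cannot* see: whether the person entering
    the PIN is doing so freely or under coercion.
    """
    return (
        attempt.device_token == registered_device
        and attempt.pin == stored_pin
        and attempt.biometric_ok
    )

# A coerced victim passes every check, exactly like a legitimate user:
coerced = LoginAttempt("device-123", "4921", True)
print(authorise(coerced, "device-123", "4921"))  # True
```

Every input the coerced victim supplies is genuine, so the function returns exactly what it would for a willing customer - authorisation and consent are indistinguishable at this layer.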
But, of course, you cannot be forced into meaningful consent. If someone is threatening you, then you are not consenting; you are being coerced. But how can a bank - or anyone - know whether you legitimately want to proceed with a transaction?
Perhaps the bank could have detected that this was "unusual" activity. Does this user normally transfer thousands of pounds late at night? But how many of us have got annoyed at a bank suddenly deciding that our £3.50 lunch is a precursor to serious financial crime and declining the transaction? We get grumpy at banks which stop us doing what we want with our cash. So there's an incentive for banks not to go overboard with fraud detection.
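The trade-off shows up even in a toy rules-based filter. The thresholds below are entirely made up, and real systems use statistical models rather than hard-coded rules, but the tension is the same - catch the late-night theft and you also flag legitimate purchases:

```python
# Hypothetical, over-simplified anomaly rules. Real fraud detection
# uses statistical models; the false-positive trade-off is the same.

def looks_unusual(amount_gbp: float, hour: int, typical_max_gbp: float) -> bool:
    """Flag a transaction as 'unusual' using crude, made-up rules."""
    late_night = hour >= 23 or hour < 6
    far_above_normal = amount_gbp > 10 * typical_max_gbp
    return far_above_normal or (late_night and amount_gbp > typical_max_gbp)

# A thousands-of-pounds transfer at 2am from someone whose usual
# ceiling is £200 trips the filter...
print(looks_unusual(5000, hour=2, typical_max_gbp=200))  # True
# ...but so does a perfectly legitimate late-evening purchase:
print(looks_unusual(250, hour=23, typical_max_gbp=200))  # True (false positive)
```

Loosen the rules and the theft sails through; tighten them and customers get grumpy. There is no threshold that separates coercion from an unusual but genuine transaction.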
And, in this case, what could they do? Call the user and see if they really wanted to make the transaction? Can a bank reasonably tell if someone is drunk or drugged? Do they have the right to prevent incapacitated people making dangerous decisions?
Perhaps, to detect coercion, the bank could video call you, and check there was no one in the room with you? Or they could use voice stress analysis and heart-rate monitoring to check that you weren't being pressured into something.
That all seems a bit overboard and annoying if you're legitimately trying to make a large transaction. Would users put up with that if they knew it was being done to keep them safe?
Ultimately, this is why we have banks, regulators and insurance. The banks can pause and reverse transactions to help recover stolen money. Regulators can tell banks how they have to treat vulnerable or victimised customers. Insurance can cover the losses and provide incentives to improve security.
This is, as I said, an upsetting episode. But it stands in stark contrast to the terrifying and unregulated world of cryptocurrency-powered decentralised finance. Under DeFi, a user being coerced can never hope to recover their money. Smart Contracts cannot distinguish between authorisation and consent. And that is its fatal flaw for consumers.
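The contrast can be made concrete. The sketch below is a Python stand-in for an on-chain contract (not real Solidity or EVM code): the only question it can ask is whether the transfer was validly authorised, and once it runs there is no one with the power to reverse it:

```python
# Illustrative sketch of why a smart contract cannot help a coerced user.
# A Python stand-in for an on-chain token contract; signature checking
# is reduced here to a simple key match.

class TokenContract:
    def __init__(self, balances: dict[str, int]):
        self.balances = balances

    def transfer(self, signer: str, sender: str, recipient: str, amount: int) -> None:
        # The only question the contract can ask: was this authorised?
        if signer != sender:
            raise PermissionError("invalid signature")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        # No pause, no reversal, no regulator. Once this runs,
        # the money is gone - coerced or not.

contract = TokenContract({"victim": 1000, "thief": 0})
contract.transfer(signer="victim", sender="victim", recipient="thief", amount=1000)
print(contract.balances)  # {'victim': 0, 'thief': 1000}
```

A bank can claw a payment back after the fact; code like this, by design, cannot.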