I think "Law 3.0" is OK, actually
I recently came across a post about "The Energy Bill 2023 and the Fusion of Technology and Law - We are going to be governed under 'Law 3.0', and we won't like it one little bit". It is a superficial look at the "horrors" of being governed by technical measures.
It starts off reasonably enough by describing the evolution of our legal system:
- Law 1.0 says "Thou shalt not kill".
- Law 2.0 says "Thou shalt not pollute. But, rather than specific legislation, we'll spin up a body to do the tedious work of enacting our policy".
- Law 3.0 says "Thou shalt not drive over the speed limit. And we'll fit all cars with a chip that prevents them doing so".
I think that's a fair enough assessment. Is "Law 3.0" a good thing or a bad thing? Perhaps we can have a discussion about the limits of technology and political philosophy? No, we just get this "argument":
The tyrannical implications of such a mode of governance are so obvious that it really ought to go without saying.
Ummm.... no? Perhaps we could discuss these supposed tyrannical implications?
The author is terrified that the Government is going to stop him running his dishwasher. You see, the Energy Bill says that technical experts can send a signal to smart appliances asking them to reduce their electricity use at certain times.
That's it. That's the horror he is railing against.
Pollution a bit high? Send a signal to ask your freezer to reduce its cooling temperature. Electricity prices going to spike in an hour? Tell people's cars to start charging now, but to throttle back later. That sort of thing. You know, save you a bit of money, reduce pollution, stabilise the energy supply. Terrifying...
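To make that concrete, here's a rough sketch of what handling such a signal might look like - the message format, class names, and numbers below are entirely made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class GridSignal:
    """A hypothetical demand-response request from the grid operator."""
    kind: str          # e.g. "reduce_load"
    max_watts: int     # requested power ceiling during the event
    duration_min: int  # how long the request applies

class SmartFreezer:
    def __init__(self) -> None:
        self.power_limit_watts = 300  # normal operating ceiling

    def handle(self, signal: GridSignal) -> None:
        # The grid asks; the appliance (or its owner) decides whether to comply.
        if signal.kind == "reduce_load":
            self.power_limit_watts = min(self.power_limit_watts, signal.max_watts)
            print(f"Throttling to {self.power_limit_watts} W for {signal.duration_min} minutes")

# Prices about to spike: ask the freezer to coast for an hour.
SmartFreezer().handle(GridSignal(kind="reduce_load", max_watts=50, duration_min=60))
```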
Now, there might be a dystopian use of this. Perhaps the state could command all TVs to turn off when the opposition's political adverts are on. Perhaps they could turn off the car chargers of known protestors - thus preventing them from attending a demonstration. Maybe they'd turn off everyone's freezers in order to boost Tesco's profits?
This leads to some interesting questions. What sort of safeguards should we have? What level of control do people want? Who chooses the experts? Who secures the system? What similar problems have happened before? What are the positive ways this could be used?
But, nope, the post doesn't discuss that. It just continually reiterates a pathological fear of "technical experts" telling people how to behave. He wraps it up in a vague coat of morality - saying that we should be allowed to choose to break laws and face the consequences.
But, what are we actually talking about here?
A Thought Experiment
Let's say a power station fails unexpectedly in the middle of winter. There are several options available to us.
1. Wait until Parliament can reconvene to debate and pass a law which limits people to x kWh of electricity per day, with a maximum of y kW at any moment.
2. Let people suffer rolling blackouts / brownouts as the energy supply struggles to keep up with demand.
3. Have a team of technical experts send signals to people's washing machines asking them to only switch on when there's surplus power.
Quite obviously (1) is impractical. The lack of speed and expertise is one of the (many) reasons Law 1.0 doesn't work in large complex systems which require a swift reaction.
And (2) is the sort of self-sufficient Libertarian nonsense which imagines a hellscape for everyone except themselves. Great! You can choose not to follow the law and let everyone else suffer the consequences.
And (3) is... boringly pragmatic. I guess with the slight risk that it might be abused to... what? Deny people their constitutional right to run a high power vacuum cleaner whenever they want?
Morality
The article makes this moral argument:
The speed limit does not compel us: we can choose to abide by it, or not. And this, most crucially of all, means that we have moral agency. We can choose to do right or wrong. [...] in its way, [Law 3.0] is the worst affront to the dignity of man out of them all, because it destroys the very conditions of moral agency. I reiterate: if one does not have the freedom to choose, because one is compelled to act morally, then one’s moral conduct is not really moral at all.
I don't really get that. We ban guns so that people can't choose to wave them about recklessly - because impinging on your freedom is better than clearing up corpses.
Rather more prosaically, we ban the sale of inefficient domestic appliances. Yes, the experts are being mean by forcing you not to make a moral decision about whether to waste electricity. Boo-fucking-hoo.
But, if this moral agency is so important, why isn't it available to "the experts"? Why shouldn't they be allowed to choose to take a bribe from a washing line manufacturer to switch off the nation's tumble-dryers? They can suffer the consequences of being caught, tried, and punished.
Law 1.0 - which the author is so fond of - would do that.
Or, we could use Law 3.0 to implement a technical measure which says such a signal can never be sent unless 4 out of 5 experts agree to it.
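As a sketch - with made-up names, and booleans standing in for what would really be cryptographically signed approvals - that quorum rule might look something like this:

```python
APPROVAL_THRESHOLD = 4  # out of a panel of 5 experts

def may_send_signal(approvals: list[bool]) -> bool:
    """The control system refuses to transmit unless a quorum agrees."""
    return sum(approvals) >= APPROVAL_THRESHOLD

# A single bribed expert achieves nothing on their own.
assert not may_send_signal([True, False, False, False, False])
assert may_send_signal([True, True, True, True, False])
```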
Civic Hygiene
A decade ago, I wrote up my thoughts on Civic Hygiene.
Civic hygiene isn't about saying we distrust our current government - it's about not trusting the next government.
I still stand by that. We should make it hard or impossible for a corrupt entity to abuse the power it is entrusted with. But that doesn't mean giving them no power.
In a democratic society we accept that we sometimes have to do things that aren't in our direct personal interests in order to keep society functioning. Sometimes we can choose whether or not to obey (e.g. by breaking the speed limit); other times the state restricts us (by banning the sale of poisons).
If we don't make the transition to smarter and more responsive energy consumption, then we risk grid stability, more pollution, and higher energy costs. That damages all of us.
We should embrace new ways of organising ourselves. And we should embrace technological limitations which protect the majority. And those limitations must be safeguarded.
I don't know whether this law is well written, or whether there are adequate safeguards, or whether abusers of its powers can be punished. But I do know that this moral pontificating doesn't even begin to address the practical issues.
Alex says:
My main contention with Law 3.0 is that software is terribly strict. The law is notorious for its verbosity, and yet it fails to cover absolutely all cases (I believe there's a reason Common Law is still in practice, and laziness about switching to a Civil Law system is probably not it).
Let's take the speed limit example. How do you put into software a reason to break the limit? Say there's an emergency: your wife is about to deliver a baby, or someone's been stabbed. Is it a big deal to break the speed limit, or run a few red lights at empty intersections in the middle of the night? How would the software in your car know it should allow that?
We can hope that our cars will one day be completely autonomous, fully connected so they can coordinate with other cars on the road, and driven by an AGI you could reason with to explain what the emergency is. But how would that work before we get all of that?
We think of software as super flexible and easy to change. And it is, compared with, say, replacing metal parts in a machine. But until it's changed, it's extremely strict.
Another matter is reliability. All software is buggy, and the more it's changed the buggier it gets. Imagine that once in a blue moon the speed limit isn't enforced because some value overflowed and the limit effectively became larger than the speed of light. Or that, on February 29th, for whatever reason, instead of enforcing a maximum speed the limit flips to being a minimum.
I'm not even talking about the security of OTA settings changes. You can refer to @internetofshit for an abridged list of delights.
I mean, I sure want my boiler to heat the water when it's a little cheaper, rather than at peak hours. And a hive mind of self-driving cars sounds wonderful. But somehow all these delights keep staying in the future, in the promises, and we keep getting only more surveillance, more bugs, and more vulnerabilities.
@edent says:
I think there are three small mistakes in your reasoning.
Firstly - speeding because you're about to give birth is still speeding. It still puts you and other road users at risk. In most jurisdictions it doesn't get you off a ticket. In the UK, you aren't allowed to cross a red light even if there's an emergency services vehicle trying to get past.
Similarly, just because you can't see anyone on an "empty" road, doesn't mean they aren't there. So, yes, it is a big deal to break traffic laws - despite what most drivers think.
I can easily imagine a car with a software-enforced speed limit. Need to go faster to escape a tsunami? Press the big emergency over-ride button. That can send an alert somewhere and you can justify your decision later. Perhaps the courts are sympathetic to your excuse, perhaps not. It stops people "casually" breaking the law and allows nuance if necessary.
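Something like this, perhaps - everything in this sketch is hypothetical, but it shows how an over-ride can restore choice while preserving accountability:

```python
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

def emergency_override(driver_id: str, reason: str) -> None:
    """Lift the software speed limit, but leave an indelible record.

    The driver regains control immediately; the justification is
    judged later by humans (a court), not by the machine.
    """
    AUDIT_LOG.append({
        "driver": driver_id,
        "reason": reason,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    # A real car would also alert the authorities and disable
    # the limiter for a fixed period.

emergency_override("driver-42", "escaping a tsunami")
```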
Secondly, the law is also unreliable - it has bugs, edge-cases, and loopholes. It can take years to adjust. Software can be tweaked OTA.
The law also fails in spectacular ways - look at the abolition of the Imprisonment for Public Protection (IPP) sentence. That's caused all sorts of weird issues. So we're already living in a system that sometimes goes wrong.
Finally - this isn't the future, this is now! My electricity provider already sends a signal to my battery telling it the prices for the next 24 hours - and my battery can decide whether to charge or not. If I had an electric car it could use V2G (Vehicle To Grid) to help balance out my energy usage.
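As an illustrative sketch - this isn't my provider's actual API, and the function and prices are invented - the battery's decision could be as simple as:

```python
def plan_charging(prices_per_kwh: list[float], hours_needed: int) -> set[int]:
    """Pick the cheapest hours in the next 24 in which to charge.

    prices_per_kwh: one price per hour, as sent by the supplier.
    Returns the set of hour-indices during which to draw power.
    """
    ranked = sorted(range(len(prices_per_kwh)), key=lambda h: prices_per_kwh[h])
    return set(ranked[:hours_needed])

# Cheap overnight slots at 02:00 and 03:00 win out.
prices = [0.30] * 24
prices[2] = prices[3] = 0.07
print(plan_charging(prices, hours_needed=2))  # {2, 3}
```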
Will there be vulnerabilities in a new system? Yes. But that doesn't mean we should forgo all its benefits.
Alex says:
I must not have come across quite right. I wasn't saying that an emergency should get you off a ticket. I was trying to say that, for the price of a traffic ticket and a higher risk to others on an empty road, someone might save a life.
The incompleteness of the law (in Civil Law systems, and especially in Common Law) is a feature. It allows for action in rare, unexpected circumstances. By no means does it remove responsibility and consequences, but there's at least room for that.
In no way am I advocating for casually breaking laws. Rather, I'm against the rigidity of software. It's going to be a terrible future unless every system has a big red button: "I understand I'm about to break the law and there will be consequences, but I really need to do this right now".
An OTA update is faster than the usual legislative/legal process, but it's still going to happen after your emergency, if at all. Look at the response times to vulnerabilities. And it's still probably going to happen as a result of the aforementioned process. That is, the software is only the enforcer, not the law itself.
Now, as you mentioned, Law 1.0 is not perfect. And that's another argument in its favour, actually. Say a certain ethnic group suddenly becomes illegal: every person of that group has until midnight to leave the country or be executed. How would you like your smart home giving your friends up to the authorities? Or would you rather have the option to break this particular law, despite the possible consequences?
Another aspect of software rigidity as law is: how would you even define breaking the law? If the law software is not tampered with, but I still managed to do something not intended, did I break the law? Say I somehow deduced that, for whatever reason, the Konami code on the blinker switch, or playing a specific old country song on the infotainment system, lets me bypass the speed limit. Am I breaking the law? Again, for all the niceties of software law, look at how well smart contracts are doing.
Yet another aspect of software law is synchronization of updates. Law 1.0 usually has a date attached for when a specific clause goes into (and out of) effect. If my car's wifi is broken and it stops getting updates, am I breaking the law? Or am I fine to live by the version installed on my stuff? Are we OK with potentially thousands of different version combinations installed all over the place? And since, presumably, not everything can be codified (and enforced) by software, how do we make the two systems work together?
@edent says:
True. But all of these can be applied to Law 1.0. If you find a clever loophole in the law which means you don't pay tax - guess what! HMRC will still come after you, and probably win. If you've found an exploit that lets you speed - you can still be prosecuted.
You're also arguing against yourself when you say an OTA update might be too slow for your emergency, but it might be too quick to be used against you. Well, the same is true of lots of law. The state can already sanction your bank accounts pretty quickly if you're a Russian oligarch. And yet, people complain that it is too slow to catch all of them.
The answer isn't to throw up our hands and declare it too hard a problem. We have human courts to work out the difficult cases ("My car was still on the old firmware!" or "My freezer has a software bug which uses too much energy on a leap day") but for the majority of the time we let the guard-rails do their job.
And, finally, I agree with you - this is no place for "smart contracts"!
Alex says:
Let me suggest an alternative. Let's keep Law 1.0 as the law. And let's implement all the niceties and conveniences in software, without making the software the law.
Coordinated power consumption sure is convenient. I'd love that in all my appliances. But let me pay surge prices if I decide my situation makes it worth it.
Likewise, I'd love an autopilot in my car that doesn't randomly run red lights or speed just because. But let me drive manually and choose whatever speed, or override the limits - taking all the responsibility, of course.
Like Jeremy said, let's make good things easy and bad things hard. The one thing I'm asking for is: let's make bad things hard, or even very hard, but not impossible. Because rarely is a thing universally bad in absolutely all contexts, and sometimes, even considering the consequences of breaking the law, it might still be useful to be able to do just that.
@edent says:
I disagree on two counts. Surge pricing just means that rich people are insulated from its effects. If we think that, for example, energy use should be reduced at a specific time, that should apply to everyone equally.
Secondly, I come from a background where we literally do make bad things impossible. As I said, we use code to stop you choosing a compromised password. Or to prevent people from flooding the radio waves with excess noise.
We don't say "well, we'll make it a bit inconvenient but if people really want to..." We do that because we have to protect individuals, systems, and society from harm.
Frankly, I don't trust most drivers to behave responsibly. If it were only your life you were risking, perhaps I'd be more comfortable - but it isn't. And I don't want to live in a world where any drunk driver can make the unilateral decision to endanger people because we were a bit worried about a hypothetical need for a "responsible" driver to speed.
Jeremy GH says:
My take on this: Law is - fundamentally - about getting people to do the 'right thing' and not do the 'wrong thing'; to that end it defines 'right' and 'wrong', and sets penalties for doing 'wrong'. Both Law 1.0 and Law 2.0 do this, in their different ways: but abiding by the law is a matter of free will - a choice to do 'right'. But choosing 'wrong' is always possible - and implies choosing the consequences. What Law 3.0 is about is removing the ability to choose 'wrong' - removing free will; making doing 'wrong' not only impossible or the wrong way to think, but - ultimately and literally - unthinkable. And that I find deeply disturbing.
While technical means of enforcing limits may make sense, and - subject to safeguards and 'civic hygiene' - might even be mandated, they seem to me a first stage on a slippery slope: how far should we proceed? Would mandatory brainwashing ever be considered acceptable?
And on the subject of 'civic hygiene' and not trusting the next government (something overlooked by both parties, in how they have legislated), it is fundamental to the British constitution that no government is bound by predecessors, nor can it bind its successors - rather a mixed blessing.
@edent says:
The law may try to encourage people to do the right thing - but there are literally more laws than you could ever possibly read or understand. No one can know all of them, let alone follow them. That's often given as an argument against more laws - it gives the state a pretext to arrest anyone who didn't know that X was a crime.
I'm sympathetic to your argument. But we've already collectively agreed that there are a bunch of things that are functionally impossible to do - because the risk of harm is so high. You can't buy guns, poisons, radioactive material, etc. We accept that having strict technical controls on those items is necessary.
You can't board a plane carrying a knife - and we have metal detectors / body scanners to stop people exercising their free will.
New tube lines have barriers between the platform and the train. That makes it impossible to trespass on the line.
In the private sector, we mandate strong passwords. Why shouldn't someone be allowed to choose "password123" for their bank's login? Why can't they have "1234" as their ATM PIN?
So, we're already on this "slippery slope".
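For example, here's a minimal sketch of how a service can reject compromised passwords using the real Have I Been Pwned "range" API - the function name and surrounding code are my own invention:

```python
import hashlib
import urllib.request

def is_compromised(password: str) -> bool:
    """Check a password against the Have I Been Pwned range API.

    Only the first five characters of the SHA-1 hash are sent,
    so the password itself never leaves the machine.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")
    # Each response line is "<hash-suffix>:<number-of-breaches>".
    return any(line.split(":")[0] == suffix for line in body.splitlines())

if is_compromised("password123"):
    print("Rejected: that password appears in known breaches.")
```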
James says:
It might be nice if laws -- or at least perhaps the frequently-applied and/or controversial ones -- had to have associated metrics with them before they're approved for merge into the legal code.
Then we could get into longer-term nitpicking about what are fair and desirable metrics (because, ideally, they'd become a model of what the population more-or-less agrees that they want) and also we could git blame/bisect to see the legal changes that produced effective results, and view the commit history to discover the pressures or politics that resulted in the enactment of problematic changes.
To some extent I think this probably already exists -- the forking of British legal code from the EU is an example -- I'm just not aware of the tooling and processes that are used to measure results, perform diffs, cherry-pick or revert, and so on.
Tom Morris said on mastodon.social:
@Edent I mean, energy policy aside, delegated powers are still a damn mess.
Ivan says:
In my opinion, the technical side of the problem is much scarier than the moral one. The people trusted with the power to turn off washing machines probably will not abuse it to the point of causing too much damage (except someone will not be able to do the laundry on time, once), but ransomware will become much more interesting. 2084: "Who controls the washing machines of the nation, controls its future."
Will the people implementing the system be able to devote Space Program levels of attention to its safety and security? How do we ensure it's not just another IoT system that fails because of insufficient key strength, a Turing-complete on-wire format fed to a vulnerable parser, logic errors in the authentication flow, or Tech Company A deciding to stop supporting a critical dependency? I don't think it's impossible, but it's a hard programming and managerial problem.
Rob says:
The whole argument seems backwards. Surely this is about giving "technical experts" the authority to send a signal asking your appliance to reduce energy consumption. But you can choose not to comply - you don't have to connect your fridge to the internet.
All this discussion about speeding and so on is just a "slippery slope" argument. Any law can be reduced to such a meaningless level if we want to, but how does that help us?
More comments on Mastodon.