Regulating computers in the way that we regulate other machines will be no more effective at preventing undesirable robotic outcomes than the copyright mandates of the past 20 years. Photograph: Blutgruppe/Corbis

Why it is not possible to regulate robots


We regulate machines, from drills to defibrillators. What distinguishes a power-drill from a robot-drill? A computer driving it

There's an old joke about the sciences: biology is just applied chemistry, chemistry is just applied physics, and physics is just applied maths. It's really a neat little quip about essentialism and reductionism. While it's true that biology can be accurately described as "applied chemistry," treating living things as alive – and not as a set of chemical reactions no different in principle from making a cup of cocoa or extracting a pigment to use in housepaint – has undeniable utility.

But we draw boundaries. While there are disciplines that straddle biology and chemistry and treat organisms as though the most important thing about them is neither their chemical reactions nor the fact that they are living, we acknowledge that there are two great poles between which these gradations shade. There are a lot of things that we can point to and say, "that's chemistry", and there are a lot of things we can point to and say, "that's biology".

I've been thinking about robots this week, and whether they are a pole – like biology, chemistry, physics and maths – or whether they are an in-between thing, like biochemistry or theoretical physics.

Many classics of science fiction have little trouble with these distinctions. Robert A Heinlein quite happily depicts "brains" – vast computers occupying large complexes, capable of having agency and personality and will – and "robots," which are mostly humanoid machines that drive themselves according to their own set of rules, of varying degrees of subtlety and complexity. The brains can take over the robots and manipulate them – use them as wireless peripherals, the way that your computer might instruct your printer to run off a page – but it's pretty clear that the brain's essential embodiment is whatever's under that bunker, and not the bits of world-manipulating gadgetry that the brain can command.

Three laws of robotics

Isaac Asimov famously gave us robots with "positronic brains" that obeyed the "three laws of robotics", which revolved around the subservient and protective nature of robots. And Asimov has especially long-lived robots whose brains are moved from one body to another over time, but my reading of Asimov made it clear to me that a "positronic brain" had some inseparable connection to a body, preferably an anthropomorphic one. There weren't a lot of positronic refrigerators or clock-radios or fart-machines in Asimov's future.

For all that Asimov continues to have enormous cachet and resonance in modern discussions of robots, I think that Heinlein's idea of a brain/body distinction holds up better than Asimov's. An Asimovian robot always feels like it is something more than a computer in a fancy, mobile case. Heinlein, by contrast, at least locates the most salient fact of the robot in the systems that parse and execute instructions, not the peripherals that receive commands from these systems.

But even Heinlein's robots quickly break down as a category. A Heinleinian "brain" that drives a humanoid robot around is a brain and not a robot. Even if the humanoid robot were to pick up the brain and carry it around, it would still be a dumb peripheral that is being driven by a "brain." But once that robot opens up its chest cavity and securely affixes the brain to its internal structure and closes up the door and bolts it shut, it is now a "robot" and considered to have the agency and will that we've been imbuing our ambulatory machines with since the golem.

One thing that is glaringly absent from both the Heinleinian and Asimovian brain is the idea of software as an immaterial, infinitely reproducible nugget at the core of the system. Here, in the second decade of the 21st century, it seems to me that the most important fact about a robot – whether it is self-aware or merely autonomous – is the operating system, configuration, and code running on it.

If you accept that robots are just machines – no different in principle from sewing machines, cars, or shotguns – and that the thing that makes them "robot" is the software that runs on a general-purpose computer that controls them, then all the legislative and regulatory and normative problems of robots start to become a subset of the problems of networks and computers.

Unstoppable computers

If you're a regular reader, you'll know that I believe two things about computers: first, that they are the most significant functional element of most modern artifacts, from cars to houses to hearing aids; and second, that we have dramatically failed to come to grips with this fact. We keep talking about whether 3D printers should be "allowed" to print guns, or whether computers should be "allowed" to make infringing copies, or whether your iPhone should be "allowed" to run software that Apple hasn't approved and put in its App Store.

Practically speaking, though, these all amount to the same question: how do we keep computers from executing certain instructions, even if the people who own those computers want to execute them? And the practical answer is, we can't.

Oh, you can make a device that goes a long way to preventing its owner from doing something bad. I have a blender with a great interlock that has thus far prevented me from absentmindedly slicing off my fingers or spraying the kitchen with a one-molecule-thick layer of milkshake. This interlock is the kind of thing that I'm very unlikely to accidentally disable, but if I decided to deliberately sabotage my blender so that it could run with the lid off, it would take me about ten minutes' work and the kind of tools we have in the kitchen junk-drawer.

This blender is a robot. It has an internal heating element that lets you use it as a slow-cooker, and there's a programmable timer for it. It's a computer in a fancy case that includes a whirling, razor-sharp blade. It's not much of a stretch to imagine the computer that controls it receiving instructions by network. Once you design a device to be controlled by a computer, you get the networked part virtually for free, in that the cheapest and most flexible commodity computers we have are designed to interface with networks and the cheapest, most powerful operating systems we have come with networking built in. For the most part, computer-controlled devices are born networked, and disabling their network capability requires a deliberate act.
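
To make that concrete, here is a minimal sketch (all names hypothetical, not drawn from any real appliance's firmware) of how little code stands between a computer-controlled blender and the network, using nothing beyond Python's standard library:

```python
# A minimal sketch, not a real product's firmware: once a kitchen appliance
# is driven by a commodity computer, the operating system's built-in HTTP
# stack makes putting it on the network almost free.
from http.server import BaseHTTPRequestHandler, HTTPServer

class BlenderHandler(BaseHTTPRequestHandler):
    """Hypothetical network interface for a computer-controlled blender."""

    def do_POST(self):
        # In real firmware these paths would drive motor and heater pins;
        # here they simply acknowledge the command.
        if self.path in ("/blend", "/heat", "/stop"):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(f"ok: {self.path[1:]}\n".encode())
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Anything on the local network can now send instructions to the blade.
    HTTPServer(("0.0.0.0", 8080), BlenderHandler).serve_forever()
```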

My kitchen robot has the potential to do lots of harm, from hacking off my fingers to starting fires to running up massive power-bills while I'm away to creating a godawful mess. I am confident that we can do a lot to prevent this stuff: to prevent my robot from harming me through my own sloppiness, to prevent my robot from making mistakes that end up hurting me, and to prevent other people from taking over my robot and using it to hurt me.

The distinction here is between a robot that is designed to do what its owner wants – including asking "are you sure?" when its owner asks it to do something potentially stupid – and a robot that is designed to thwart its owner's wishes. The former is hard, important work and the latter is a fool's errand and dangerous to boot.
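As a sketch of that distinction (the function and command names here are made up for illustration), an owner-serving interlock warns and asks for confirmation, but it does not overrule an owner who insists:

```python
# Hypothetical sketch of an owner-serving interlock: risky commands trigger
# an "are you sure?" prompt, but the owner's deliberate choice still wins.
RISKY_COMMANDS = {"run_with_lid_off", "heat_unattended"}

def execute(command: str, confirm=input) -> bool:
    """Run a command, asking for confirmation when it is potentially stupid."""
    if command in RISKY_COMMANDS:
        answer = confirm(f"'{command}' could hurt you. Are you sure? [y/N] ")
        if answer.strip().lower() != "y":
            print("Cancelled.")
            return False
    print(f"Executing {command}...")
    return True
```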

A fool's errand

It's a fool's errand for the same reason that using technology mandates to stop people from saving a Netflix stream or playing unapproved Xbox games is a fool's errand. We really only know how to make one kind of computer: the "general-purpose computer" that can execute every instruction that can be expressed in symbolic logic. Put more simply: we only know how to make a computer that can run every program. We don't know how to make a computer that can run all the programs except for a subset that, for whatever reason, good or bad, we don't want people to run.
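
One way to see why, sketched below in Python with made-up names, is the same diagonalization argument that makes the halting problem undecidable: any purported perfect filter for "forbidden" programs can be fed a program built out of the filter itself, which then does the opposite of whatever the filter predicts.

```python
# A sketch of the diagonalization argument; every name here is hypothetical.
def do_the_banned_thing():
    print("running the instructions the filter was supposed to block")

def is_forbidden(program) -> bool:
    """Claimed perfect filter. No such function can actually be written."""
    raise NotImplementedError

def contrarian():
    # Built out of the filter itself: misbehave exactly when the filter
    # says this program is fine, and behave when it says it is forbidden.
    if is_forbidden(contrarian):
        return
    do_the_banned_thing()

# Whatever is_forbidden(contrarian) answers, the answer is wrong -- so a
# computer that runs "every program except the forbidden ones" can't exist.
```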

This is not a contentious statement among computer scientists – it's about as controversial as saying "we can't make a wheel that only turns for socially beneficial purposes" or "there's no way to make a lever that can only be used to shift masses in accord with the law of the land."

And yet, we have iPhones that won't run software that Apple hasn't blessed, and Netflix apps that won't save the streams you watch for sharing or later viewing. How is this possible? It's down to a mesh of global laws that prohibit changing the software to add the prohibited functionality. These laws are not very effective, and create a lot of problems that are a lot worse than watching TV the wrong way or running an app that hasn't been approved. Most importantly, these laws make it illegal to tell people about the defects in their computers, because knowledge of those defects is key to subverting the controls on iPhones and video streams and games and such (if you know about a bug in these programs, you can exploit it to trick them into relaxing their strictures). This means that people who rely on their computers to behave securely and not leak the view from their camera, the contents of their email, or the passwords to their bank-accounts are kept in the dark.

It's possible to armour computers, the software that runs on them, and the devices that we connect them to so that they generally do what they're told and don't betray us. These computers will still have defects, and there's a debate to be had about the best way of repairing those defects as they're discovered. For example, it's pretty clear that computers with auto-updating switched on will, in most instances, be more secure than computers that are only updated when we take some deliberate step to update them. But if it is impossible to switch off auto-updating, then there's also the possibility that someone who wants something that's not in your best interests – criminals, bad governments, spies, employers, voyeurs – will figure out how to take over the auto-update mechanism in order to deliberately break your technology. We've already seen this: one of the "lawful interception" tools sold to police around the world (including the world's most brutal dictatorships) infiltrates its targets' computers by pretending to be an iTunes update.
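
Here is a deliberately simplified sketch (standard library only, hypothetical file names and digest) of the kind of check an updater can make before applying anything, so that control of the update channel alone is not enough to push arbitrary code. Real auto-updaters verify public-key signatures rather than a pinned hash, but the principle is the same:

```python
# Simplified sketch with hypothetical names: the machine refuses any update
# whose digest doesn't match a value obtained somewhere other than the
# update channel itself.
import hashlib

# Digest of the genuine update, published out of band (hypothetical value).
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def update_is_genuine(path: str) -> bool:
    """Return True only if the downloaded update matches the pinned digest."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == EXPECTED_SHA256

if __name__ == "__main__":
    if update_is_genuine("downloaded_update.bin"):
        print("apply update")
    else:
        print("refuse update: it did not come from whoever holds the real key")
```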

A computer that causes change

Is there such a thing as a robot? An excellent paper by Ryan Calo proposes that there is such a thing as a robot, and that, moreover, many of the thorniest, most interesting legal problems on our horizon will involve them.

As interesting as the paper was, I am unconvinced. A robot is basically a computer that causes some physical change in the world. We can and do regulate machines, from cars to drills to implanted defibrillators. But the thing that distinguishes a power-drill from a robot-drill is that the robot-drill has a driver: a computer that operates it. Regulating that computer in the way that we regulate other machines – by mandating the characteristics of their manufacture – will be no more effective at preventing undesirable robotic outcomes than the copyright mandates of the past 20 years have been effective at preventing copyright infringement (that is, not at all).

But that isn't to say that robots are unregulatable – merely that the locus of the regulation needs to be somewhere other than in controlling the instructions you are allowed to give a computer. For example, we might mandate that manufacturers subject code to a certain suite of rigorous public reviews, or that the code be able to respond correctly in a set of circumstances (in the case of a self-driving car, this would basically be a driving test for robots). Insurers might require certain practices in product design as a condition of cover. Courts might find liability for certain programming practices and not for others. Consumer groups like Which? and Consumers Union might publish advice about things that purchasers should look for when buying devices. Professional certification bodies, such as national colleges of engineering, might enshrine principles of ethical software practice into their codes of conduct, and strike off members found to be unethical according to these principles.
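
A "driving test for robots" could look a lot like an ordinary public test suite. The sketch below is purely illustrative: the decide() controller and its parameters are invented stand-ins, not any real self-driving API.

```python
# Sketch of a public "driving test" scenario suite that a hypothetical
# self-driving controller, decide(), would have to pass before deployment.
import unittest

def decide(obstacle_ahead: bool, distance_m: float, speed_kmh: float) -> str:
    """Toy controller standing in for real self-driving code."""
    if obstacle_ahead and distance_m < speed_kmh:  # crude stopping margin
        return "brake"
    return "proceed"

class DrivingTest(unittest.TestCase):
    def test_child_steps_into_road(self):
        self.assertEqual(decide(True, 10.0, 50.0), "brake")

    def test_clear_road(self):
        self.assertEqual(decide(False, 0.0, 50.0), "proceed")

if __name__ == "__main__":
    unittest.main()
```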

Can 'robot law' be separated from software law?

These are powerful regulatory tools, and they are in widespread use today. Surgical scalpels are horribly dangerous, and there are lots of rules about who is allowed to wield them and when, and what happens if you are negligent with one, or if you make one that isn't up to snuff. But we don't regulate anything that might be used as a scalpel. We don't try to keep anything that might be a scalpel out of non-medical hands. And we don't burden doctors or scalpel-makers with a mandate to ensure that they only part flesh in accord with the Hippocratic Oath.

I am skeptical that "robot law" can be effectively separated from software law in general. Self-driving cars are robots that motor down the road, and their code can kill. But traffic signals are the networked peripherals of computers that send routing flags to robots (self-driving cars) and to the humans who pilot less-sophisticated robots (that is, normal cars). Bad traffic signal code can kill, too. For the life of me, I can't figure out a legal principle that would apply to the robot that wouldn't be useful for the computer (and vice versa).
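
To illustrate the point about signals as network peripherals, here is a hypothetical sketch of a traffic signal broadcasting its phase to whatever vehicles are listening; nothing here is drawn from a real traffic-control protocol, but a bug in code like this is exactly as safety-critical as a bug in the car:

```python
# Hypothetical sketch: a traffic signal as a networking peripheral,
# broadcasting its phase on the local network as a JSON "routing flag".
import json
import socket
import time

PHASES = ["green", "amber", "red"]

def broadcast_phases(port: int = 50123, cycle_s: float = 2.0, cycles: int = 1):
    """Send one message per phase change to anything listening nearby."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    for _ in range(cycles):
        for phase in PHASES:
            message = json.dumps({"signal_id": "junction-1", "phase": phase})
            # A bug here -- say, reporting "green" to both directions at
            # once -- kills just as surely as a bug in the car's own code.
            sock.sendto(message.encode(), ("255.255.255.255", port))
            time.sleep(cycle_s)

if __name__ == "__main__":
    broadcast_phases()
```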

Which is not to say that robot law is pointless. Quite the contrary – thinking through the implications of computer law when the computers in question are directly controlling physical apparatus is likely to clarify a lot of fascinating and deep questions about computers and all the ways we regulate them: through code, through law, through markets and through norms.
