In the campaign against killer robots, researchers point with optimism to a precedent that proved largely successful: the prohibition on the use of biological weapons. That ban was enacted in 1972, amid advances in bioweaponry research and growing awareness of the risks of biowarfare.
Several factors made the ban on biological weapons work. First, state actors didn't have much to gain from using them. Much of the case for biological weapons was that they were unusually cheap weapons of mass destruction, and access to cheap weapons of mass destruction is mostly bad for states.
Opponents of LAWS have tried to make the case that killer robots are similar. “My view is that it doesn't matter what my fundamental moral position is, because that's not going to convince a government of anything,” Russell told me. Instead, he has focused on the case that “we struggled for 70-odd years to contain nuclear weapons and prevent them from falling into the wrong hands. In large quantities, [LAWS] would be as lethal, much cheaper, much easier to proliferate, and that's not in our national security interests.”
But the UN has been slow to agree even to a debate over a lethal autonomous weapons treaty. There are two major factors at play: First, the UN's process for international treaties is generally a slow and deliberative one, while rapid technological changes are altering the strategic situation with regard to lethal autonomous weapons faster than that process is set up to handle. Second, and probably more importantly, the treaty has some strong opposition.
The US (along with Israel, South Korea, the United Kingdom, and Australia) has thus far opposed efforts to secure a UN treaty banning lethal autonomous weapons. The US's stated reason is that since LAWS could in some cases have humanitarian benefits, a ban now, before those benefits have been explored, would be “premature.” (Current Defense Department policy is that there will be appropriate human oversight of AI systems.)
Opponents nonetheless argue that it's better for a treaty to be put in place as soon as possible. “It's going to be virtually impossible to keep [LAWS] to narrow use cases in the military,” Javorsky argues. “That's going to spread to use by non-state actors.” And it's often easier to ban a technology before anyone possesses it and wants to keep using it. So advocates have worked for the past several years to bring LAWS up for debate at the UN, where the details of a treaty can be hammered out.
There's a lot to hammer out. What exactly makes a system autonomous? If South Korea deploys, along its Demilitarized Zone border with North Korea, gun systems that automatically shoot unauthorized persons, that's a lethal autonomous weapon, but it's also a lot like a land mine. “Arguably, it can be a bit better at discriminating than a minefield can, so maybe it even has advantages,” Russell said.
Or take “loitering munitions,” an existing technology. Fired into the air, Scharre writes, they circle over a wide area until they home in on the radar systems they want to destroy. No human is involved in the final decision to dive in and attack. These are autonomous weapons, though they target radar systems, not humans.
These and other issues would have to be settled for a UN ban on autonomous weapons to be useful. And with the US opposed, an international treaty against lethal autonomous weapons is unlikely to succeed.
There's another form of advocacy that might impede military uses of AI: the reluctance of AI researchers to work on such uses. Leading AI researchers in the US are largely in Silicon Valley, not working for the US military, and partnerships between Silicon Valley and the military have so far been fraught. When it was revealed that Google was working with the Department of Defense to analyze drone footage through Project Maven, Google employees revolted, and the contract was not renewed. Microsoft employees have similarly objected to military uses of their work.
It's possible that tech workers can delay the day when a treaty is needed, or create pressure to make such a treaty happen, simply by declining to write the software that will power our killer robots; and there are signs that they're inclined to do so.
How scared should we be?
Killer robots have the potential to do a lot of harm and to make the means of killing large numbers of people more available to totalitarian states and to non-state actors. That's pretty scary.
But in many ways, the situation with lethal autonomous weapons is just one manifestation of a much larger trend.
AI is making things possible that were never possible before, and doing so quickly, such that our capabilities frequently get out ahead of thought, reflection, and strong public policy. As AI systems become more powerful, this dynamic will become more and more destabilizing.
Whether it's killer robots or fake news, algorithms used to shoot suspected combatants or trained to make parole decisions about prisoners, we're handing over more and more critical aspects of society to systems that aren't fully understood and that are optimizing for goals that might not quite reflect our own.
Advanced AI systems aren't here yet. But they get closer every day, and it's time to make sure we'll be ready for them. The best time to come up with sound policy and international agreements is before these science fiction scenarios become reality.