
The World's Leading A.I. Experts Want to Ban Killer Robots and Prevent a Third Age of War, but F

We built him because it was "impractical" not to.

In an urgently worded letter to the UN, a group of the A.I. industry's top experts--including 116 industry leaders, among them SpaceX and Tesla founder Elon Musk, and dozens of organizations in nearly 30 countries, including China, Israel, Russia, Britain, South Korea and France--have called for a ban on autonomous weapons, often euphemistically referred to as "killer robots." They ask that these weapons be added to the list of arms banned under the UN's Convention on Certain Conventional Weapons, a convention whose purpose is to restrict weapons "considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately."

In the letter, signatories implored U.N. leaders to work hard to prevent an autonomous weapons “arms race” and “avoid the destabilizing effects” of the emerging technology, saying such an arms race could usher in the “third revolution in warfare” after gunpowder and nuclear arms.

“Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.

“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close," they said.

Elon Musk even went so far as to call A.I. humankind's "biggest existential threat." Is he mistaken?

This is a technology that will enable war on a greater scale--more of the globe engulfed in its flames, at "timescales faster than humans can comprehend," with robots carrying out the death sentences of millions of innocent people. It means whole groups of people eliminated at no human cost to the attackers. That is certainly something worth thinking very seriously about, "impractical" or not.

In an age when mankind still ravages the planet with illegal wars of choice using ever "smarter" weapons to facilitate murder on nearly genocidal scales, you'd think the tech community would be pretty much unanimous in supporting a ban, but this is--sadly--not the case. Many simply don't grasp the seriousness of the threat, while others have far, far too much faith in humankind's ability to use such weapons in a strictly just and moral fashion.

Among the dissenters: Wired's Tom Simonite and Facebook's Mark Zuckerberg. Naively, Zuckerberg sees only the promise of A.I. while failing to perceive its moral hazard, and conflates the ban on killer robots with a ban on all A.I.

Simonite, on the other hand, seems utterly impervious to the idea that these weapons will be used to expedite evil rather than good, and implicitly accepts war as a permanent feature of human society. What both Simonite and Zuckerberg fail to recognize is that these technologies will facilitate murder and unjust wars on a scale unprecedented in human experience, something Bill Joy described as the "further perfection of extreme evil" in his article for Wired magazine in 2000.

That article, Why the Future Doesn't Need Us, is a profound and spine-chilling treatise on the frightening power A.I. will unleash, and should be required reading for anyone looking to truly understand the kind of threats we face. We highly, highly recommend Simonite and Zuckerberg read it.

In it, Joy describes two dystopian scenarios: one in which AI and machines take over, and another in which humans retain control of the machines.

The first scenario is essentially the premise of the film "I, Robot": the machines take over, and humans are kept, at best, as pets.

In the second scenario "... the average man may have control over certain private machines of his own, such as his car or his personal computer, (but) control over large systems of machines will be in the hands of a tiny elite - just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity." (emphasis added)

"The 21st-century technologies - genetics, nanotechnology, and robotics (GNR) - are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them."

"Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication. I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals." (emphasis added)

This is why it was so troubling to read Tom Simonite's article "Sorry, Banning Killer Robots Just Isn't Practical." In it, Simonite adopts the intellectually lazy stance that a ban would be too logistically difficult to enforce, and the morally lazy position that we should accept war and conflict as a permanent feature of human civilization. To the rational human, these positions are unthinkable. Bluntly put, that moral stance is the road to hell; it would likely condemn humanity to self-destruction within our lifetimes. Simonite's moral compromises simply aren't survivable ones.

To the first point, that a ban is logistically "impractical," Simonite cites a report on artificial intelligence and war commissioned by the Office of the Director of National Intelligence. The report concludes that A.I. is set to massively magnify military power, just as Musk and Joy describe. The argument is then made that a ban won't work because the US and other countries won't be able to stop themselves from building arsenals of weapons that can decide when to fire, because "The temptation for using them is going to be very intense."

So we should abandon the idea of a ban because our governments, which routinely launch illegal wars of aggression and are currently engaged in the "most extreme terrorist campaign of modern times," will be tempted to use them? That's the argument? Really?

If someone said "don't ban that new class of weapon, murderous criminals will be way too tempted to use them. Let's approve them and ineffectively regulate them instead," you'd probably call them insane. Incredibly, that's exactly the logic we're told should apply here.

Simonite also cites Rebecca Crootof, a researcher at Yale Law School, who argues that rather than negotiating a ban, "that time and energy would be much better spent developing regulations":

"International laws such as the Geneva Convention that restrict the activities of human soldiers [could be adapted] to govern what robot soldiers can do on the battlefield, for example."

There are numerous problems with this argument, the first of which is that the US flouts the Geneva Conventions, and even the 800-year-old Magna Carta, every time it assassinates someone with a drone. The Geneva Conventions have not stopped the US--apparently the world's most principled and just warrior--from torturing people on a massive scale, from indiscriminately using banned weapons on densely populated civilian areas, from launching illegal wars of aggression that cause millions to die, from mounting dozens of illegal regime-change operations in countries around the world, or from sending American weapons to aid terrorist proxy forces.

The second problem is logistical. If you ban war crimes, then any leader responsible for them can be brought up on charges--take Augusto Pinochet, for example. It is easy to identify who is responsible and, through swift punishment, to dissuade others from committing war crimes in the future. Regulation implicitly legitimizes; that is why crimes like pre-emptive war and ethnic cleansing are banned outright rather than regulated. So if you merely regulate what killer robots can do on the battlefield, you do two things:

1) Implicitly accept the right of an unthinking machine to commit murder.

2) Massively increase the complexity of applicable law, diminish accountability, and create situations which are difficult to resolve.

Who would be accountable for an autonomous robot's actions? Would a successful prosecution end an illegal invasion? Would it mean withdrawing the malfunctioning unit, simply repairing it and redeploying, or destroying it? Which courts would determine culpability and damages? Would victims have real access to such a court?

If anything, regulations would be far less effective at protecting innocent life and preventing criminal aggression than a total ban.

Another argument in favor of A.I. weapons is that smart weapons are more precise, meaning that "...commanders can use precision-guided weapon systems with homing functions to reduce the risk of civilian casualties," a requirement according to the Pentagon's Law of War Manual.

The argument looks good on its face, but it starts to fall apart when you try to reconcile reducing civilian casualties with the fact that these "smart" strikes are occurring in the context of illegal wars, and being used to assassinate people in civilian areas. Though it sounds like the US is following the Geneva Conventions, it most definitely isn't.

Protocol I of the Geneva Conventions, to which the US is not a signatory, prohibits deliberate or indiscriminate attacks on civilians and civilian objects in a war zone, and obliges the attacking force to take precautions to spare civilian lives and property as much as possible. This effectively outlaws US drone strikes, on the basis that they often target civilian structures and people.

The Fourth Convention, to which the US is a signatory, protects people from assassination and execution without a fair trial. It prohibits:

"the passing of sentences and the carrying out of executions without previous judgment pronounced by a regularly constituted court, affording all the judicial guarantees which are recognized as indispensable by civilized peoples."

This is the Convention that outlaws any drone strike conducted for the purpose of assassination, which the US does routinely in civilian areas with a terrible cost in innocent human life, not to mention all the terrorist recruitment such strikes incentivize.

More important than whether the US is abiding by the Geneva Conventions is whether drones have actually reduced civilian casualties.

The tragic answer: not in the least.

Despite the rise of "smart" weapons, UNICEF reports that the proportion of civilian casualties has not diminished since 1900, but has actually increased.

"Civilian fatalities in wartime climbed from 5 per cent at the turn of the 20th century, to 15 per cent during World War I, to 65 per cent by the end of World War II, to more than 90 per cent in the wars of the 1990s."

By 2015, the so-called "War on Terror" was estimated to have killed a minimum of 1.3 million, though the number, calculated by the Physicians for Social Responsibility, could well be as high as 2 million.

This now-perpetual war has already cost U.S. taxpayers at least $5 trillion. The result? Terrorism has increased by 6,500%.

It seems counterintuitive at first that smarter weapons ostensibly made to reduce casualties are increasing body counts, but it all makes sense once you realize that it's precisely the weapon's accuracy that provides the excuse to use it more frequently.

The smart weapons and drones we rely on today incentivize our criminal governments to wage ever more illegal wars and attacks, because they reduce or eliminate scenes of American soldiers coming home in body bags. As fewer soldiers die, the political and diplomatic costs of using violence go down. As the "cost" of violence goes down, its application increases. Killer robots could thus be considered weapons of mass impunity: justified by their "accuracy" and "intelligence," they will be deployed more broadly and more pervasively throughout the world.

These are not weapons we can simply hope our governments will use justly and morally. Rather, this is a frighteningly powerful technology we have to keep out of our governments' hands at all costs. They won't be saving lives and defending democracy. They'll be in poor countries killing poor people to expand geopolitical power just like our "smart" bombs and drones are doing today.

Humanity must choose if we will allow automatons to do violence against us. If killer robots were to become our police, citizens would become little more than prisoners in their own countries - a robot police force could become the force to impose dictatorship at the flip of a switch. Armies could be hacked and controlled by single actors with destructive aims.

Killer robots deployed to war zones could easily have impacts as grave as that of any WMD we have today. Given enough time, and deployed over a broad area, killer robots that could replenish their power and ammunition themselves could simply go on killing indefinitely.

As weapons with the destructive potential of WMDs, the reasons to ban killer robots become similar to the reasons for banning nuclear weapons. Let's list a few:

1. Nuclear weapons are inhumane. They kill enormous numbers of people indiscriminately.

--So do smart weapons and drones, and so long as governments are waging war and murdering people illegally and immorally on massive scales, we can only expect that to continue.

2. So long as any state has nuclear weapons others will want them.

--We already see multiple countries, including the US, UK, France, Russia, Israel, Iran and China, racing to develop and acquire robotic and autonomous weapons. So long as wars are perpetuated, wasteful arms races will continue.

3. So long as any state retains nuclear weapons, they are bound one day to be used – if not by deliberate use, then certainly by accident or misjudgement.

--The problem with killer robots is not that they might be used by accident or misjudgement, but that they are already used deliberately to wage illegal wars and acts of terror that result in the deaths of millions of innocents. The lower the political and economic cost of using these weapons, the more people will die.

4. Nuclear deterrence is at worst of zero utility, and at best of highly dubious utility, in maintaining stable peace.

--As the signatories have indicated, killer robots can only be a tremendously destabilizing technology. Not only in that countries must now all expend resources to defend themselves against this threat, but in that these weapons could be used by anyone from the CIA, to private military organizations to organized crime for any number of reasons.

5. Because the resources freed up by discontinuing the nuclear arms race could be used for other global challenges.

--In a world that needs its scarce resources to survive climate change, overpopulation, resource scarcity and other problems of its own making, indulging in pointless, costly arms races could prove lethal to the species. Not only do we need to ban killer robots, we need to end all illegal wars and start following the Geneva Conventions and the Laws of War. We humans should be free of the scourge of war by now. It's a self-destructive activity humanity can no longer afford.

Wars don't bring peace. Peace brings peace.

The trillions of dollars being spent on war and weapons development would be far better deployed ensuring that humans all enjoy a basic right to safety, food, shelter and education; and on retrofitting the global economy to become sustainable and avoid the most catastrophic effects of climate change.

You can't eliminate terrorism by engaging in it. You can't free people by bombing them. The only way the world becomes safer for us is if it becomes safer for all.

That's why it is so troubling to see articles like Simonite's. What a massive moral failure and lost opportunity when educated commentators, rather than taking courageous principled stances, tacitly approve unending immoral war and weapons of unprecedented evil. They advocate wasting money on futile, self-destructive arms races rather than initiatives that could actually improve humanity's chances of surviving the next 100 years.

Let none of us be deluded that killer robots will be used to uphold justice or promote democracy. They will be used criminally, immorally and terribly as weapons of mass-murder and destruction to maintain and expand geopolitical and class power. Whether intentionally or by accident, killing on massive scales will only get easier and easier. Therefore we, as a species, must make a conscious choice to adopt morals and laws that do the opposite: eliminate the scourge of war and all temptation to solve problems through violence.

Let us also never believe that a ban is somehow "impractical" or impossible. It is our god-given, supreme responsibility as sentient, moral beings to ensure there is a ban and to ensure, through popular activism and political involvement that the bans on killer robots and unjust war and assassination are upheld and their violations punished to the fullest extent of the law.

On July 7th, 2017, a United Nations conference voted to adopt a categorical ban on nuclear weapons, proving that the idea of eliminating our most terrifying weapons is not far-fetched or "impractical," but, in fact, one of humanity's most cherished collective hopes.

#NoKillerRobots #IndictUSWarCriminals #NoRegimeChange #DivestFromWar
