Don’t Let Police Arm Autonomous or Remote-Controlled Robots and Drones

It’s no longer science fiction or unreasonable paranoia. Now, it needs to be said: police must not arm land-based robots or aerial drones. That’s true whether these mobile devices are remote-controlled by a person or autonomously controlled by artificial intelligence, and whether the weapons are maximally lethal (like bullets) or less lethal (like tear gas).

Police currently deploy many kinds of mobile, task-performing technologies, including flying drones, remote-controlled bomb-defusing robots, and autonomous patrol robots. While these devices serve different functions and operate differently, none of them, absolutely none of them, should be armed with any kind of weapon.

Mission creep is very real. Time and time again, technologies given to police to use only in the most extreme circumstances make their way onto streets during protests or to respond to petty crime. For example, cell site simulators (often called “Stingrays”) were developed for use in foreign battlefields, brought home in the name of fighting “terrorism,” then used by law enforcement to catch immigrants and a man who stole $57 worth of food. Likewise, police have targeted BLM protesters with face surveillance and Amazon Ring doorbell cameras.

Today, scientists are developing an AI-enhanced autonomous drone, designed to find people during natural disasters by locating their screams. How long until police use this technology to find protesters shouting chants? What if these autonomous drones were armed? We need a clear red line now: no armed police drones, period.

The Threat is Real

Law enforcement robots and drones of all shapes, sizes, and levels of autonomy already patrol the United States. Autonomous Knightscope robots prowl for “suspicious behavior” while collecting images of license plates and phone-identifying information; Boston Dynamics robotic dogs accompany police on calls in New York and check the temperatures of unhoused people in Honolulu; Predator surveillance drones fly over BLM protests in Minneapolis.

We are moving quickly towards arming such robots and letting autonomous artificial intelligence determine whether or not to pull the trigger.

According to a Wired report earlier this year, the U.S. Defense Advanced Research Projects Agency (DARPA) in 2020 hosted a test of autonomous robots to see how quickly they could react in a combat simulation and how much human guidance they would need. News of this test comes only weeks after the federal government’s National Security Commission on Artificial Intelligence recommended the United States not sign international agreements banning autonomous weapons. “It is neither feasible nor currently in the interests of the United States,” asserts the report, “to pursue a global prohibition of AI-enabled and autonomous weapon systems.”

In 2020, the Turkish military deployed Kargu, a fully autonomous armed drone, to hunt down and attack Libyan battlefield adversaries. Autonomous armed drones have also been deployed (though not necessarily used to attack people) by the Turkish military in Syria, and by the Azerbaijani military in Armenia. While we have yet to see autonomous armed robots or drones deployed in a domestic law enforcement context, wartime tools used abroad often find their way home.

The U.S. government has become increasingly reliant on armed drones abroad. Many police departments seem to purchase every expensive new toy that hits the market. The Dallas police have already killed someone by strapping a bomb to a remote-controlled bomb-disarming robot. 

So activists, politicians, and technologists need to step in now, before it is too late. We cannot allow a time lag between the development of this technology and the creation of policies to let police buy, deploy, or use armed robots. Rather, we must ban police from arming robots, whether in the air or on the ground, whether automated or remotely-controlled, whether lethal or less lethal, and in any other yet unimagined configuration.

No Autonomous Armed Police Robots

Whether they’re armed with a taser, a gun, or pepper spray, autonomous robots would make split-second decisions about taking a life, or inflicting serious injury, based on a set of computer programs.

But police technologies malfunction all the time. Face recognition, audio gunshot detection, and automatic license plate readers, for example, frequently generate false positives. When this happens, the technology deploys armed police to a situation where they may not be needed, often leading to wrongful arrests and excessive force, especially against people of color erroneously identified as criminal suspects. If the malfunctioning police technology were armed and autonomous, that would create a far more dangerous situation for innocent civilians.

When, inevitably, a robot unjustifiably injures or kills someone, who will be held responsible? Holding police accountable for wrongfully killing civilians is already hard enough. In the case of a bad automated decision, who gets held responsible? The person who wrote the algorithm? The police department that deployed the robot?

Autonomous armed police robots might become one more way for police to skirt or redirect the blame for wrongdoing and avoid making any actual changes to how police function. Debate might bog down in whether to tweak the artificial intelligence guiding a killer robot’s decision making. Further, technology deployed by police is usually created and maintained by private corporations. A transparent investigation into a wrongful killing by an autonomous machine might be blocked by assertions of the company’s supposed need for trade secrecy in its proprietary technology, or by finger-pointing between police and the company. Meanwhile, nothing would be done to make people on the streets any safer.

MIT Professor and Future of Life Institute cofounder Max Tegmark told Wired that AI weapons should be “stigmatized and banned like biological weapons.” We agree. Although its mission is much more expansive than the concerns of this blog post, you can learn more about what activists have been doing around this issue by visiting the Campaign to Stop Killer Robots.

No Remote-Controlled Armed Police Robots, Either

Even where police have remote control over armed drones and robots, the grave dangers to human rights are far too great. Police routinely over-deploy powerful new technologies in already over-policed Black, Latinx, and immigrant communities. Police also frequently use such technologies as part of the United States’ immigration enforcement regime, and to monitor protests and other First Amendment-protected activities. We can expect more of the same with any armed robots.

Moreover, armed police robots would likely increase the frequency of excessive force against suspects and bystanders. A police officer on the scene generally has better information about unfolding dangers and opportunities to de-escalate than an officer miles away looking at a laptop screen. And a remote officer may feel less empathy for the human target of mechanical violence.

Further, hackers will inevitably try to commandeer armed police robots. They already have succeeded at taking control of police surveillance cameras. The last thing we need are foreign governments or organized criminals seizing command of armed police robots and aiming them at innocent people.

Armed police robots are especially menacing at protests. The capabilities of police to conduct crowd control by force are already too great. Just look at how the New York City Police Department paid out hundreds of thousands of dollars to settle a civil lawsuit over its punitive use of a Long Range Acoustic Device (LRAD) against protesters. Police must never deploy taser-equipped robots or pepper-spray-spewing drones against a crowd. Armed robots would discourage people from attending protests. We must de-militarize our police, not further militarize them.

We need a flat-out ban on armed police robots, even if their use might at first appear reasonable in uncommon circumstances. In Dallas in 2016, police strapped a bomb to a bomb-defusing robot in order to kill a gunman hiding inside a parking garage who had already killed five police officers and shot seven others. Normalizing armed police robots poses too great a threat to the public to allow their use even in extenuating circumstances. Police have proven time and time again that technologies meant only for the most extreme circumstances inevitably become commonplace, even at protests.

Conclusion

Whether controlled by an artificial intelligence or a remote human operator, armed police robots and drones pose an unacceptable threat to civilians. It’s far harder to remove a technology from the hands of police than to prevent it from being purchased and deployed in the first place. That’s why now is the time to push for legislation to ban police deployment of these technologies. The ongoing revolution in the field of robotics requires us to act now to prevent a new era of police violence.

Published July 16, 2021 at 06:46PM
Read more on eff.org
