In late November, the city's board of supervisors gave local police the right to kill a criminal suspect using a tele-operated robot, should they believe that not acting would endanger members of the public or the police. The justification offered for the so-called "killer robots plan" was that it would prevent atrocities like the 2017 Mandalay Bay shooting in Las Vegas, which killed 60 people and injured more than 860, from happening in San Francisco.

Yet little more than a week on, those same legislators have rolled back their decision, sending the plans back to a committee for further review. The reversal is due in part to the huge public outcry and lobbying that followed the initial approval. Concerns were raised that removing humans from decisions of life and death was a step too far. On December 5, a protest took place outside San Francisco City Hall, and at least one supervisor who initially approved the policy later said they regretted their choice.

"Despite my own deep concerns with the policy, I voted for it after additional guardrails were added," Gordon Mar, a supervisor in San Francisco's Fourth District, tweeted. "I regret it. I've grown increasingly uncomfortable with our vote & the precedent it sets for other cities without as strong a commitment to police accountability. I do not think making state violence more remote, distanced, & less human is a step forward."

The question being posed by supervisors in San Francisco is fundamentally about the value of a life, says Jonathan Aitken, senior university teacher in robotics at the University of Sheffield in the UK. "The action to apply lethal force always has deep consideration, both in police and military operations," he says. Those deciding whether to pursue an action that could take a life need important contextual information to make that judgment in a considered manner, and that context can be lacking in remote operation.

"Small details and elements are crucial, and the spatial separation removes these," Aitken says. "Not because the operator may not consider them, but because they may not be contained within the data presented to the operator. This can lead to mistakes." And when it comes to lethal force, mistakes can mean the difference between life and death.

Asaro also downplays the suggestion that guns on the robots could be replaced with bombs, saying that the use of bombs in a civilian context could never be justified. (Some police forces in the United States do currently use bomb-wielding robots to intervene; in 2016, Dallas police used a bomb-carrying robot to kill a suspect in what experts called an "unprecedented" moment.)

The introduction of killer robots would also actively harm police forces' ability to interact with the community in other ways, says Asaro. "There aren't a sufficient number of applications where these things are going to be useful," he says. Meanwhile, tasks where robots are genuinely valuable, such as delivering telephones and other items during hostage negotiations, would be tainted by the suspicion that a phone-carrying robot could in fact be a gun-toting one.

But beyond the practicalities lies a more fundamental objection: robots of any type, even remotely controlled ones, shouldn't be able to take human lives. For Aitken, the very idea of allowing robots to make life-or-death decisions is flawed. "There is a clear disassociation between the action and the person making the decision," he says. "It's the human making the decision to take action, but the robot that would physically carry it out on the orders of a person who may—or may not—have a full appraisal of the situation."

The decision by San Francisco supervisors to reverse course on the use of lethal force by robots has been welcomed by campaigners who fought the original approval last week. "Thanks to the passionate residents of the Bay Area and the leadership of supervisors Preston, Ronen, and Walton, the Board today voted against SFPD use of deadly force with remote-controlled robots," says Electronic Frontier Foundation policy analyst Matthew Guariglia.

But it's only a temporary reprieve: the supervisors will reconsider the decision at a later date, and they're likely to meet a similarly strong response, says Guariglia. "Should the Rules Committee revisit the issue, the community must come together to stop this dangerous use of technology," he says.