This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
Last week, the Dallas police killed a suspected gunman with a bomb-delivering robot. It was a desperate measure for desperate times: five law enforcement officers were killed and several more wounded before the shooter was finally cornered.
Of course, the shooter needed to be stopped; preventing further murder and mayhem is always a priority. But the method, a robot bomb, was so unorthodox that it raises many ethical and policy questions, if not also legal ones. Let’s look at some.
Direct effects on society
If deadly force was justified, does the method matter? It might. On one hand, you might think that killing is killing, but imagine if the robot had thrown acid at the suspect, or shot poisoned darts, or used a flamethrower, or stabbed him to death. Would that change things? If so, then the method matters to you, and we’re just haggling over the line.
Law enforcement in America has very little history of throwing bombs or exploding grenades at suspects. It does use nonlethal grenades, such as flash-bang, smoke, and tear-gas types, but those aren’t designed or intended to kill. So, what happened in Dallas was a largely untested (until now) use case for explosive weapons in law enforcement.
Some ways to kill people are judged by our society to be inhumane. We don’t draw and quarter people anymore or perform executions in other cruel and unusual ways. In warfare, many weapons are prohibited under international humanitarian law, including poison, chemical weapons, glass bullets, serrated bayonets, and others. A movement is growing to preemptively ban lethal autonomous weapons systems, or “killer robots.”
While this is not yet an argument against the use of explosive weapons in law enforcement, it is very much an open question whether we should go down that road. Death by bombing is much more characteristic of bloodthirsty dictatorships, such as North Korea, than of civilized society.
Another direct effect of this precedent is that it may erode what little trust exists between police negotiators and criminal suspects. The robot used in Dallas was similar to the kinds used in hostage and other dangerous situations to deliver items, such as food and mobile phones for communications; these robots help to peacefully resolve a crisis in many cases.
Now those criminals might not trust negotiators or their methods, afraid of being double-crossed by a kamikaze robot. In weighing whether the police should employ robot suicide bombers, we need to consider the practical effects, not just the moral ones: that tactic could make desperate people even more dangerous.
Worries about ethics
Was blowing up the suspect really the only option? It’s probably too soon to say, and we may never know, since most of us were not there. Dallas police officers were attempting to negotiate with the suspect, who reportedly responded with taunts and eventually gunfire. Certainly the shooter was dangerous, but considering that the talks had been at a stalemate for two hours, was serious harm so imminent that lethal force was justified?
That’s a lingering question, as well. As Ian Kerr, a law and philosophy professor at the University of Ottawa, puts it, “I am worried about how the possibilities created by emerging technologies reframe our perceptions of what is ‘necessary’ when it comes to the projection of lethal force.”
Let’s say lethal force would usually be justified in a situation like this. In Dallas, though, the use of the robot may change the moral calculus: the police officers were further removed from harm’s way. And in theory, less risk to their lives means less reason to kill a suspect.
But it can go the other way, too. In warfare, the use of armed drones and other military robotics has forced the question of whether we’re more likely to choose war before we have exhausted diplomatic, economic, and other peaceful options, because robots mean less risk to our side. Sending in robots means we don’t have to send in people.
As a result, there’s a real risk of lowering the threshold for violence and more quickly escalating a conflict. The problem is that, because killing is generally regarded as bad, war ought to be the very last resort; and the same would seem to apply to killing criminal suspects who are presumed innocent until proven guilty.
Even if lethal force was still justified in this case in order to protect officers, this could also suggest a need for more robotics in law enforcement to protect those officers. Again, in theory, this should allow more suspects to be taken in alive, if we care about that at all. But there are even deeper ethical issues with replacing human police officers with machines. These are typically related to compassion, judgment, human dignity, relationships with communities, and other features that may be missing in robo-cops.
What’s the role of law enforcement?
Though it may be hard for some to tell, there’s a big difference between military operations and law enforcement that we shouldn’t forget. In war, a primary goal is to render enemy combatants incapable, which usually means to kill them. But a primary goal of police officers, besides protecting the public, is to capture suspects so that they can stand trial.
Criminal suspects—again, presumed innocent until proven guilty—are not enemy combatants, and police officers are not judge, jury, or executioner. Sometimes, though, police officers may have to use enough force against a suspect to kill him or her, to protect innocent people in imminent danger. That’s regrettable but again is not, or should not be, an easy choice. This is a basic part of police ethics.
But with the ongoing development of nonlethal weapons—such as tasers, sonic weapons, directed energy weapons, tear gas, projectile netting, and others—there are other options besides deadly force. (These also may raise the risk of escalation and abuse, if it’s easier to incapacitate a suspect than to negotiate with him for hours.) Safe from the danger, we can now reasonably wonder whether there were other options in Dallas, though this is not intended to second-guess decisions made during the tense and difficult standoff.
Back to the different roles of warfighters and peace officers: the confusion arises because the police are becoming more militarized, now armed with tanks and other surplus weapons and equipment from the military. Many police officers are military veterans or reservists. The robot used by the Dallas police is the same kind first used by the military to defuse bombs; now it has become a weapon, not a defense against one.
Militarization of the police isn’t just a perception held by some people; it appears to be a real phenomenon. Last year, President Obama explicitly addressed the issue:
“We’ve seen how militarized gear can sometimes give people a feeling like there’s an occupying force, as opposed to a force that’s part of the community that’s protecting them and serving them. It can alienate and intimidate local residents, and send the wrong message. So we’re going to prohibit some equipment made for the battlefield that is not appropriate for local police departments.”
The Dallas police robo-bomb, along with the events that precipitated it, should push us to do some soul-searching about the role of law enforcement in society. As Lee Sjolander, one police chief who has been thinking about the relationship between law enforcement and its communities, reminded peace officers: “it’s not ‘us-versus-them’.”
In the end, the decision by Dallas police to use a robot bomb was either pre-planned or improvised: either they had considered the tactic previously and had some plan in place, or they devised it on the spot. Either way, the policy is unclear and unpublished, which means we don’t know how thoughtful it is, which issues they considered, or how they arrived at the decision.
As with other game-changing technologies, police robotics—especially as weapons—could change the character of law enforcement in society. It surely would save the lives of police officers, but there are other issues at stake, too. This needs to be a broader conversation with the democratic public that it affects and that gives police officers their authority in the first place.
Dr. Patrick Lin is the director of the Ethics + Emerging Sciences Group and a philosophy professor at California Polytechnic State University, San Luis Obispo. He has current and previous appointments at Stanford Law School’s Center for Internet and Society, Stanford School of Engineering, U.S. Naval Academy, and Dartmouth College. Dr. Lin also consults for leading industry, governmental, and academic organizations on ethics and policy related to emerging technologies.