
Killer robots are approved to fight crime. What are the legal and ethical concerns?


Last week, the San Francisco Board of Supervisors voted to allow its police department to use robots to kill suspected criminals. The decision was met with both praise and skepticism.

The decision came six years after Dallas police equipped a robot with explosives to end a standoff with a sniper who killed five officers. The Dallas incident is believed to be the first time a person has been intentionally killed in the United States by a robot. But judging by the vote in San Francisco, it may not be the last.

What are the legal concerns when governments turn to machines to end human life? UVA Today asked Professor Ashley Deeks, who has studied this intersection, to weigh in. Deeks recently returned to the Law School after serving as associate counsel to the White House and deputy legal adviser to the National Security Council.

Ashley Deeks is the Class of 1948 Research Professor of Law.

First, what is your reaction to San Francisco’s decision?

I’m surprised to see this coming from San Francisco, which is generally a very liberal jurisdiction, rather than from a city known for being “tough on crime.” But it’s also important to understand what capabilities San Francisco’s robots have and what they don’t. These are not autonomous systems that can independently choose whom to use force against. Police officers will operate them, even if remotely and from a distance. So calling them “killer robots” can be a bit misleading.

When a police officer uses lethal force, he or she is ultimately responsible for the decision. What complications might arise, legally, if a robot did the actual killing?

According to The Washington Post, the San Francisco Police Department does not plan to equip its robots with firearms. Instead, the policy seems to envision situations in which the police could outfit a robot with something like an explosive, a stun gun, or a smoke grenade. San Francisco’s policy would still keep a human “in the loop”: the human would remotely pilot the robot, control where it goes, and decide if and when the robot should detonate an explosive or incapacitate a suspect. So the connection between the human decision-maker and the use of lethal force would still be easy to trace.

Things could get more complicated if the robot stops working as intended and accidentally harms someone through no fault of the operator. If the victim or the victim’s family files a lawsuit, there may be questions about whether to hold the manufacturer, the police department, or both responsible. But this is not much different from what happens when a police officer’s gun accidentally fires and injures a person because of a manufacturing defect.

Aside from the legal questions, what ethical questions will society have to grapple with when robots take human lives? Or are the legal and ethical issues intertwined?

Legal and ethical issues are intertwined. Ideally, the legal rules enacted by states and localities will reflect careful thinking about ethics, as well as about the Constitution, federal and state laws, and smart policy choices. On one side of the scale are the significant benefits that come from tools that help protect police officers and innocent citizens from harm. Since many uses of lethal force occur because officers fear for their lives, properly regulated and carefully used robots could reduce the use of lethal force by reducing the number of situations in which officers are put in danger.

On the flip side, there are concerns about making police departments more willing to use force, even when it is not a last resort; about accidents that can arise if robotic systems are not carefully tested or their operators are not well trained; and about whether using robots in this way opens the door to the future use of systems with more autonomy in law enforcement decision-making.

One novel question that could arise is whether police departments should adopt more cautious use-of-force policies when a robot is the one delivering that force, because a robot itself cannot be killed or harmed by a suspect. In other words, we may not want to allow robots to use force to defend themselves.

You have studied how police often use artificial intelligence as a crime-fighting tool. Some governments may develop autonomous weapons systems that can choose targets themselves in armed conflict. Do you see a time when police departments start considering AI robots to make deadly decisions?

A large part of what armies do during war is locate and kill enemy forces and destroy enemy military equipment. AI tools are well suited to helping militaries make predictions about where certain targets will be located and which strikes will help win the war. There is a heated debate about whether states should deploy autonomous killer systems that can decide for themselves who or what to target, but again, the idea is that these systems will be deployed in time of war, not peacetime.

All of this is really different from what the police do. Police officers may use force only to control a situation in which there is no reasonable alternative. An officer may constitutionally use deadly force only against a person who is trying to evade arrest for a serious crime or who poses a threat of serious bodily injury or death to the officer or a third person. It’s really hard to imagine that police departments in the United States would use, or could legally use, autonomous robots that make independent decisions, based on their algorithms, about when to use force.

Do you consider San Francisco’s decision an anomaly, or do you expect other cities and police departments to explore this in the future?

Notably, San Francisco’s policy still needs to pass a second vote and be approved by the city’s mayor, so it is not yet a done deal. As for earlier examples, as you noted, the Dallas Police Department did something similar in 2016, when it used a robot with an extendable arm to place about a pound of explosives near a shooter who had holed up after killing five officers and wounding seven others. The Dallas police then detonated the C4, killing the shooter.

Many police departments around the country have explosive ordnance disposal (EOD) robots, which they obtained as surplus military equipment from the Pentagon. I wouldn’t be surprised if other states and localities decide that now is the time to clarify their policies on whether or how these robots may be used in ways that could kill or injure suspects. Some cities may adopt policies like San Francisco’s, while others may conclude that they want to ban these uses of robots.

It will be important to have a robust discussion about the details of these policies. In what specific circumstances can robots be used to deliver force? How senior should the officials who approve such a use be in a given situation? How confident are operators in the reliability of the systems? It will also be important for a wide range of people to have their say: not just police departments and civil liberties advocates, but also attorneys, ethicists, and ordinary citizens.
