A United Nations report about “killer robots” is a new spin on the rising concern about drones—and the legal problems caused by self-guided machines could be closer than you think.
The U.N. Human Rights Council plans to address part of the issue later this month in Geneva. Christof Heyns, a South African professor of human rights law, released an extensive U.N. report on the topic in April that has ominous overtones.
Specifically, Heyns warned about “lethal autonomous robotics” and posed a very serious question.
“Their deployment may be unacceptable because no adequate system of legal accountability can be devised, and because robots should not have the power of life and death over human beings,” Heyns said.
Thankfully, he said, there’s no immediate threat from a Terminator or RoboCop gone rogue.
“Robots with full lethal autonomy have not yet been deployed,” Heyns said. But he qualified that statement with a list of automated systems that have lethal power.
On the list is the Samsung Techwin SGR-1 robot (SGR stands for “security guard robot”), which can make many nonlethal targeting decisions on its own.
The SGR-1 can follow targets visually from two miles away in daytime and from one mile away at night using thermal imaging. A human working with the SGR-1 can talk to a target using a microphone and speaker. The robot also has a machine gun and grenade launcher on board (those features are optional), but the human makes the decision about using lethal force. The robot is stationary and can be mounted on top of poles or buildings. The basic model costs about $200,000.
The SGR-1 is part of a system that Samsung will offer to airports, border crossings, prisons, nuclear plants, and even military bases, according to its patent information.
The SGR-1 has already been deployed by South Korea to monitor its demilitarized zone with North Korea. The U.S. military uses the Talon, a mobile robot that handles explosives and can be used for surveillance. Domestic versions, which have night-vision abilities like the SGR-1, are used for surveillance and bomb disposal.
Like many military technologies, these robots are also making their way into the civilian world. FEMA’s website lists government-approved robots including the SNEAKY, a small surveillance robot that literally sneaks around gathering evidence. SNEAKY can do border inspections, gather audio and video evidence, sniff bags, and issue voice instructions.
These new robots are essentially ground-based versions of drones—and much like with drones, robot owners will likely face privacy tests in court as they’re adopted for civilian use.
For example, the Supreme Court recently ruled in Florida v. Jardines that police who brought a trained dog to the front porch of a home to sniff out marijuana inside had violated the owner’s Fourth Amendment rights. An earlier decision, Illinois v. Caballes, found the use of a pot-sniffing dog acceptable during a police traffic stop.
Back in 2001, the court ruled in Kyllo v. United States that the use of thermal imaging technology by police to detect a marijuana-growing operation inside a home without obtaining a warrant was unconstitutional.
The technology exists for robots to electronically sniff out marijuana using an “electronic nose” and examine objects using thermal imaging. Robots could also be equipped with listening devices, which would raise some interesting wiretapping issues, depending on the presence of warrants.
And what happens if a commercial version of a military robot uses lethal force on its own?
That seems unlikely, but the U.S. Department of Defense, in a 2011 planning document, stressed “the need to transition to a more autonomous modern system of warfare.”