Are Killer Robots Coming?

Analysis. Last week the Pentagon announced clarifications to its Autonomy in Weapon Systems directive. That’s a timely update, because news reports suggest the battlefield use of self-directed “killer robots” could be mere months away.

Killer robots — sometimes called slaughterbots or terminators — sound like science fiction. But advances in artificial intelligence have made the possibility of lethal autonomous weapons (LAWs) very real. Unlike more traditional drone weapons that are controlled remotely by a person out of harm’s way, LAWs are machines that can be activated and then set free to select and kill targets on their own.

Numerous nations — including the U.S. — have been investing in artificial-intelligence-supported devices with such autonomous capabilities, but so far no country has turned them loose to terminate enemy soldiers at will (at least, no such kills have been definitively confirmed). One line remains uncrossed: intentionally allowing a robot, acting on its own algorithms, to decide to kill a human being. However, observers worry that the ongoing Russia-Ukraine war could take us past that threshold.

In an Associated Press article titled “Drone Advances in Ukraine Could Bring Dawn of Killer Robots,” Mykhailo Fedorov, a top Ukrainian technology official, confirms that his nation is working on these weapons and that they are “a logical and inevitable next step.” More specifically, Fedorov adds: “I think that the potential for this is great in the next six months.”

Observers also think Russia is pursuing such weapons, and given its track record of targeting civilians, many believe that Russia would not be opposed to using them.

“I don’t think they’d have any scruples,” one drone manufacturer told the AP.

Indeed, Russian President Vladimir Putin has stressed the importance of AI-powered weapons, noting in 2017 that “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.”

Autonomous weapon systems are attractive to governments in part because they put machines rather than their soldiers in harm’s way. Another draw is that such weapons with embedded AI can react far more swiftly than humans to changing battlefield dynamics and can — theoretically — more accurately eliminate opposing forces.

But there are ethical and safety concerns. In fact, the Campaign to Stop Killer Robots lists nine main problems with LAWs, among which is “digital dehumanization.”

“Technology should empower all members of society, not reduce us — to stereotypes, labels, objects,” the advocacy group declares on its website. “Used against people, the technologies that enable autonomous weapons will automatically profile, pattern match and process human beings as data. The truth is, machines cannot recognize people as ‘people’. So machines deciding whether or not to subject us to attack is the ultimate form of digital dehumanization.”

Other concerns center on the loss of human control, judgment, and accountability for machine-determined kills. What happens, for example, if a machine mistakes a civilian convoy for an enemy detachment, or fails to recognize that a unit is surrendering?

“Machines don’t understand context or consequences: understanding is a human capability — and without that understanding, we lose moral responsibility and we undermine existing legal rules,” the Campaign to Stop Killer Robots states.

Similarly, a Human Rights Watch report says: “[A]lthough fully autonomous weapons would not be swayed by fear or anger, they would lack compassion, a key safeguard against the killing of civilians.”

And then there is the worry that self-directed weapons could make arms races, wars, terrorism, and the oppression of targeted groups more likely.

“Once developed, fully autonomous weapons would likely proliferate to irresponsible states or non-state armed groups, giving them machines that could be programmed to indiscriminately kill their own civilians or enemy populations,” says Human Rights Watch.

The Campaign to Stop Killer Robots, Human Rights Watch, and other activists want a ban on killer robots. So far, however, such efforts at the United Nations have been thwarted by countries that either want to advance the technology themselves or fear falling behind adversaries they believe cannot be trusted with such weapons. Still, 70 nations, including the U.S., declared last fall that common regulations would be helpful in this emerging sphere.

And this brings us back to the Pentagon action of last week. Our military officials want this country to remain a global leader on the responsible use of autonomous weapons and AI. They also want to “minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.”

According to some observers, one key phrase in the directive is the Pentagon’s commitment to “appropriate levels of human judgment over the use of force.” An expert at the Center for Strategic and International Studies told a Forbes journalist that the term “reflects the fact that in some cases — autonomous surveillance aircraft and some kinds of autonomous cyber weapons, for example — the appropriate level of human control may be little to none.”

Killer robots seem increasingly inevitable.

Will you pray for wisdom for our leaders and others around the world as these new weapons systems change the nature of armed conflict, increase its deadly risks, and reshape the relationship between us as humans and our technology?

Aaron Mercer is a contributing writer with two decades of experience in the Washington, D.C., public-policy arena. Photo Credit: Canva.
