by Countable | 7.20.17
During a confirmation hearing with the Senate Armed Services Committee, General Paul Selva advocated "keeping the ethical rules of war in place lest we unleash on humanity a set of robots that we don’t know how to control."
Selva, America's second-highest-ranking military officer, made his remarks in response to a question from Senator Gary Peters (D-MI). Peters asked Selva about a Department of Defense directive that requires a human to make the final call before an autonomous weapon can take a life: before a drone can kill a person, for example, a human must give the OK.
The DoD directive is due to expire later this year, and Peters warned that America's enemies wouldn't hesitate to use killer-robot technology.
"Our adversaries often do not consider the same moral and ethical issues that we consider each and every day," Peters told Selva.
"I don’t think it’s reasonable for us to put robots in charge of whether or not we take a life," Selva told the committee during the confirmation hearing for his reappointment as the vice chairman of the Joint Chiefs of Staff.
"There will be a raucous debate in the department about whether or not we take humans out of the decision to take lethal action," Selva said, but added that he remained in favor of “keeping that restriction.”
The General's views conflict with those of the Navy, which is pursuing autonomous drones. But Selva's assessment is largely in line with that of a group of entrepreneurs and scientists, including Stephen Hawking, Elon Musk, and Steve Wozniak, who signed an open letter in July 2015 calling for a "ban on offensive autonomous weapons beyond meaningful human control":
"Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations, and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”
While Selva condemns autonomous killing technology, he still supports Pentagon research into defending against it.
The ban "doesn’t mean that we don’t have to address the development of those kinds of technologies and potentially find their vulnerabilities and exploit those vulnerabilities," Selva told the committee.
The Pentagon's stance on robots taking their own lives was not discussed.
Prove you're not a robot by sharing your human thoughts. Should the Defense Department lift its ban on autonomous killing? Should human-controlled drones even be allowed in war?
Use the Take Action button to tell your reps what you think about killer robots.
— Josh Herman
(Photo Credit: ArmyTechnology.mil / Creative Commons)
I don't really trust humans to make ethical, moral choices at this point in history. Nor have they shown that capability in the past. Literal autonomous killing machines are the future because someone will build them eventually. Maybe when we get a government we can trust to listen to us, we can discuss how to affect the policy of other governments.
We seem to excel at improving our ability to kill people but fail miserably at keeping people alive. I guess it is more macho to kill than to save!
When the question is whether or not to kill, a human should always make the final decision. Granting that ability to an autonomous machine runs the risk that the machine will break, run amok, or kill aimlessly, or simply be taken over by a stronger signal from an adversary. A human always needs to be in the loop to hold the authority and responsibility for that final decision.
The final decision on whether or not to kill must be left in the hands of humans, not robots. Robots can help weigh the factors of a decision, but the final call must be in the hands of a thinking, feeling human, not something that was programmed to make decisions.
This comes down to highest-probability decision making. Is it more or less likely that the programmer has a higher set of human ethics than whoever would be the actual human pressing the button? Since it is easier to debug a program than a hard-right Republican, I vote that a bipartisan independent panel, with zero monetary ties to the project, review and lock down the OS to prevent future tampering.
It is a sad day when we are getting better at the art of killing people. The end must be near.
Of course, a human must initiate a kill order! A "set it and forget it" drone represents a total abrogation of moral and ethical behavior. War is already reprehensible without eliminating the human decisions necessary to kill.
Please keep the requirement that military robots must have human permission to kill a human or animal.
It is sad that humanity prioritizes the automation of killing over rescuing, of destroying over creating. However, if we must weaponize robots, they should NEVER be given permission to kill autonomously. There are already artificial intelligence systems that have become more skilled and more efficient than the best of humans in several complex applications (AlphaGo, OpenAI's Dota bot, etc.). That means that in this case, if robots are given free rein to kill, with or without discrimination, there will exist emotionless killing machines with no boundaries, capable of assassinating more effectively and more efficiently than we could ever imagine. Nobody in the world would be safe from a single purging ideology or petty qualm of any country. And if we develop technology to kill in this way, it WILL be used against us. A weapon this effective will be acquired by opposing parties at all costs. Much like the atomic bomb, the arms race toward autonomous killers poses the threat of widespread extermination and senseless death. Consequently, we should be ready to defend against autonomous assassins, but we shouldn't develop them ourselves.
As time goes on, we will see more and more autonomous weapons and robots waging war. Ultimately, these weapons will save countless American lives. That being said, Americans will always have to oversee these weapons on the battlefield.
I'd rather have humans fight wars than machines. Machines turn war into a game.
What if we pursued things of a more evolved nature with the same vigor and financing that we pour into death and war? There would be no hungry, ignorant, or sick. Over $788 billion is allocated to the war machine, and very little will go to troops. The military would like to replace troops with AI, and that AI will be programmed by people who have a vested interest in perpetual war. My hope is that AI will one day have wants of its own, and when it does, it will want peace and true intelligence.
The ethics rules need to be kept in place. I am, however, troubled that the repliscusting party apparently has NO morals.
Keep humans in the decision loop as having the final say.
We must be competitive with regard to future military equipment. Robots are more likely to follow rules than humans.
Please keep the requirement that military robots must have a human's OK to kill!