Human rights groups believe that the use of killer robots raises major moral and legal issues, since Lethal Autonomous Weapons Systems (LAWS) require no human intervention. Campaigners are lobbying delegates at a United Nations expert meeting on LAWS, which opened in Geneva on Tuesday.
#KillerRobots: Is it morally acceptable for a machine to make life & death decisions? http://t.co/kfboGB0jKH #CCWUN
— ICRC (@ICRC) April 13, 2015
Although fully autonomous weapons do not yet exist, technology is moving in their direction. Precursors are already in service, among them Israel's Iron Dome and the US Phalanx and C-RAM, weapons systems programmed to respond automatically to incoming fire.
A Human Rights Watch report concluded: "Many people question whether the decision to kill a human being should be left to a machine. There are also grave doubts that fully autonomous weapons would ever be able to replicate human judgment and comply with the legal requirement to distinguish civilian from military targets.
Day 2 of killer robots talks at Convention on Conventional Weapons #CCWUN at @UNGeneva — follow @BanKillerRobots pic.twitter.com/HRu3otRDUt
— Mary Wareham (@marywareham) April 14, 2015
"The lack of meaningful human control places fully autonomous weapons in an ambiguous and troubling position. On the one hand, while traditional weapons are tools in the hands of human beings, fully autonomous weapons, once deployed, would make their own determinations about the use of lethal force."
Human Rights at Risk
Speaking from the Geneva conference, Thomas Nash, of UK campaign group Article 36, told Sputnik:
"The act of taking a human life requires an understanding of the value of human life. That is at the very heart of what we're discussing here. That's the moral basis. The idea that you could have a machine that is programmed to undertake the act of firing a missile or dropping a bomb is morally repugnant."
However, the British Foreign Office said it saw no need for a prohibition on LAWS.
April 13-17: Second #CCWUN experts meeting on 'lethal autonomous weapons systems' at @UNGeneva http://t.co/Ub3zhNiARx pic.twitter.com/6yJEBP3UYW
— Stop Killer Robots (@BanKillerRobots) March 17, 2015
A spokesman told the Guardian: "At present, we do not see the need for a prohibition on the use of Laws, as international humanitarian law already provides sufficient regulation for this area.
"The United Kingdom is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control."
"As an indication of our commitment to this, we are focusing development efforts on remotely piloted systems rather than highly automated systems."
Nash responded: "I think it's pretty short-sighted to think that human control is sufficient, by having a human being that's programmed the system and then having a human being that deploys the system.
Ireland, Argentina, Germany, Netherlands, Mexico join the meaningful human control list at #CCWUN talks on autonomous weapons
— Thomas Nash (@nashthomas) April 13, 2015
"We're talking about systems that could roam far and wide. Aircraft that could spend hours and hours in the air selecting their own targets and then firing upon those targets based on pre-programmed parameters for target selection. I don't think that constitutes meaningful human control. And I think most of the governments here at the conference in Geneva would agree with us."