MOSCOW (Sputnik), Tommy Yang — In late August, a group of leading global AI researchers, including 116 founders of robotics and artificial intelligence companies from 26 countries, issued an open letter urging the United Nations to urgently address the challenge of lethal autonomous weapons and ban their use internationally.
The letter was released by its key organizer, Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, at the opening of the International Joint Conference on Artificial Intelligence 2017 in Melbourne, the world’s prominent gathering of top experts in AI and robotics.
THERE SHOULD BE DIFFICULT BARRIERS TO WAR
While many politicians have defended lethal autonomous weapons on the grounds that they could save human lives in a military conflict, the Australian expert told Sputnik that lowering the cost of starting a war is itself dangerous, because wars are supposed to be costly.
"If we feel we can do this [getting involved in a military conflict] without risking human lives, maybe this lowers the barrier to war. And that’s a very bad thing. There should be very difficult barriers to war. War should be a massive loss. We should be discouraging it. It should be that politicians have to explain why our sons and daughters are coming home in body bags," Walsh told Sputnik.
"It’s a rather short-sighted argument. It ignores the fact that civilians and other people get caught up in the crossfire. You may have taken your own people off the battlefield, but you’re not taking the civilians off the battlefield. We have probably been drawn into these conflicts in Iraq and Afghanistan because we thought we could fight without putting military boots on the ground. It’s a misconception that we can actually fight without risking soldiers’ lives," he said.
The Australian scholar added that if future wars were fought by robots against robots, humans would not need to fight at all, because the outcome could just as well be decided by a game of chess.
'DUMB ROBOTS' CAUSE MORE WORRIES
Rather than the superintelligent robots and AI seen in Hollywood movies, the Sydney-based AI expert said it is "dumb robots" that worry him the most.
Walsh noted that the UK Ministry of Defence has said it may eventually remove humans from the loop of Predator-like drones, something that is technically possible today.
"It wouldn’t be very capable, but it would still be able to cause a lot of harm. We have already seen that Predator drones are killing a lot of the wrong people, even with humans in the loop. It wouldn’t be difficult to do that with fully autonomous drones," the expert said.
In 2016, then-US President Barack Obama acknowledged that drone and other airstrikes had killed between 64 and 116 civilians during his administration, a figure widely criticized as understating the loss of innocent civilian lives in those strikes.
NO HUMAN-ENSLAVING EVIL AI
Although popular Hollywood sci-fi plots often feature superintelligent AIs trying to conquer the human race, as Skynet does in The Terminator, leading AI researchers dismiss such scenarios as betraying a basic misunderstanding of AI technology.
There have been several initiatives seeking to regulate the development of AI technologies. In December 2016, Dmitry Grishin, former chairman of Mail.Ru Group, proposed a draft law on robots based on the Three Laws of Robotics, conceived by Russian-born US sci-fi novelist Isaac Asimov in his 1942 short story "Runaround."
According to Asimov’s Laws, a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Russia's State Duma, the lower house of parliament, plans to introduce new legislation regulating relations between humans and AI in the near future, State Duma Speaker Vyacheslav Volodin said Monday.
"Relations between humans and AIs, the relations between humans and robots are the issues that we should define legally in the near future. This issue is on our agenda," Volodin said.
But the Swiss-based AI expert argued that regulation in this field would be difficult to put in place.
The Swiss entrepreneur’s LSTM algorithm now runs on some 3 billion smartphones worldwide. He believes that future AI will not have a goal conflict with humans, because it will realize that most resources lie out in space: less than one billionth of the Sun's light reaches the Earth. Such an AI would be ready to emigrate to outer space, something impossible for humans, since an AI can travel by radio, much as algorithms are transmitted between his own labs, the Swiss expert explained.