
Military Robots

You've probably heard the old Latin maxim "If you want peace, prepare for war". Opponents of this idea understandably argue that it breeds unnecessary militarization, fuels arms races and generally pushes society toward a tenser state, with resources spent on the military instead of on peaceful development.

And yet here we are: it would be dishonest to deny that some of the technologies we now enjoy for peaceful use stemmed from military developments, the microwave oven being a classic example. So it's no surprise that militaries are at the forefront of robotics. In fact, military robots are older than you might think: in World War II, Germany fielded the Goliath tracked mine and the USSR experimented with remote-controlled "teletanks", though neither proved very effective.

Strictly speaking, though, calling them robots would be a stretch. Recall Isaac Asimov's famous Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Of course, these come from science fiction and aren't real laws. Yet they are now treated as such in numerous works of fiction by other authors, and sometimes by scientists who acknowledge the need to protect humans from robots, especially in the future, as robots (maybe) become more autonomous and capable.
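As a thought experiment, the Three Laws can even be written down as a priority-ordered filter over a robot's possible actions. The sketch below is purely illustrative: every field, such as harms_human, is a hypothetical stand-in for a judgment no current software can actually make, which is precisely where the practical trouble begins.

```python
# Illustrative only: Asimov's Three Laws as a priority-ordered action filter.
# The boolean fields are hypothetical stand-ins for judgments (what counts as
# "harm"? who is "human"?) that no real perception system can deliver.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # would executing this injure a person?
    prevents_harm: bool     # would it save a person from injury?
    ordered_by_human: bool  # was it commanded by a person?
    endangers_self: bool    # could it destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: never harm a human being.
    if action.harms_human:
        return False
    # Second Law: obedience matters only when the First Law is satisfied,
    # which the check above already guarantees.
    # Third Law: self-preservation yields to the first two laws, so a
    # self-endangering act is allowed only in their service.
    if action.endangers_self and not (action.prevents_harm or action.ordered_by_human):
        return False
    return True

# The "through inaction" clause of the First Law is the hard part: it cannot
# be checked one action at a time, since it requires comparing everything
# the robot might have done instead.
print(permitted(Action("open door", False, False, True, False)))  # True
```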

Meanwhile, most military "robots" are remotely controlled devices. The most prominent example is the aerial drone, widely used by the US military, and the controversy surrounding that practice is well known. But while the use of remotely controlled devices capable of killing humans is at least somewhat regulated, what happens when these machines are completely autonomous? Who is responsible for them? Such robots already exist: essentially turrets on tracks that can move, detect people, warn them and shoot them, all without direct operator control. The technology is here; the legislation and ethics are not.
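To make the stakes concrete, here is a minimal sketch of the detect-warn-engage loop such a sentry implies. Every name in it is an assumption made up for illustration (the sensor and turret interfaces, the human_in_loop flag); it is not taken from any real system's code.

```python
# A hypothetical escalation loop for an autonomous sentry: detect, warn,
# then engage. None of these interfaces come from a real system; the point
# is to show the single branch the whole ethical debate hangs on.

import time
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    WARNING = auto()

def sentry_loop(sensor, turret, human_in_loop: bool) -> None:
    state = State.PATROL
    while True:
        target = sensor.detect_person()        # assumed perception call
        if target is None:
            state = State.PATROL
        elif state is State.PATROL:
            turret.issue_warning(target)       # audible challenge first
            state = State.WARNING
        else:  # a target is still present after the warning
            if human_in_loop:
                turret.await_operator(target)  # a person decides
            else:
                turret.fire(target)            # the machine decides
            state = State.PATROL
        time.sleep(0.1)                        # assumed polling interval
```

Notice that the difference between a remotely controlled weapon and an autonomous one is that single human_in_loop branch; everything else in the loop is already routine engineering.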

For example, in 2009 Professor Noel Sharkey of the University of Sheffield voiced concerns about the trend, noting robots' inability to make difficult decisions, such as telling friend from foe or acting proportionally, that is, judging how much force is justified to gain a given military advantage:

Robots that can decide where to kill, who to kill and when to kill is high on all the military agendas… Robots do not have the necessary discriminatory ability. They're not bright enough to be called stupid — they can't discriminate between civilians and non-civilians; it's hard enough for soldiers to do that. And forget about proportionality, there's no software that can make a robot proportional.

In other words, a robot has no reliable way to decide whether it should level a city block or shoot only a few soldiers; modern software simply cannot make that judgment. For now it remains an open question whether robots can ever be given such a degree of, shall we say, situational understanding, and even if we settle the question of "can", there is still the question of "should".
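One way to feel the force of Sharkey's point is to write down the function a proportionality check would require. The signature below is hypothetical; the problem is not the arithmetic but that its inputs are legal and moral judgments, not sensor readings.

```python
# Hypothetical, for illustration only: the "function" a proportionality
# check would need. International humanitarian law asks whether expected
# civilian harm is excessive relative to the concrete military advantage,
# but neither quantity has an agreed unit, let alone a sensor to measure it.

def strike_is_proportional(military_advantage: float,
                           expected_civilian_harm: float) -> bool:
    # Any threshold placed here encodes a moral judgment, and the numbers
    # themselves would have to be guessed by the very software whose
    # judgment is in doubt.
    return expected_civilian_harm <= military_advantage
```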
