Killer robots have no ethical principles and are prone to technical mistakes; this is why machines should not be allowed to make decisions with potentially lethal outcomes without prior approval from a human being, Raul Rojas, a professor at the Free University of Berlin, told Sputnik Germany.
"Eventually, someone should feel responsible for this, and not just blame it on the machines," Rojas stated.
Robots Make Mistakes
Technical errors are a common occurrence in machines, the analyst argued, particularly when countries deploy drones.
While a human being can always apply intuition and knowledge of the situation, machines are incapable of doing so, Rojas claimed.
"Robots are made to only recognize standard cases, all deviations can't be programmed," the analyst noted.
Rojas added that machines are "ridiculously easy to deceive." For example, missiles programmed to home in on the heat given off by an aircraft can be misled if the plane drops hot metal objects as decoys.
Ethical Concerns
Rojas is confident that such decisions require human rational analysis and empathy, qualities that machines do not possess.
"I think that it is impossible to incorporate ethical principles inside the machines," Rojas stated.
Rules and exceptions can be encoded in an algorithm, but to make ethical decisions, "robots require human cognitive abilities."
According to the analyst, creating a truly human-like robot will remain impossible even in the distant future.
"This will never happen with machines, and therefore robots will never be able to behave in accordance with any ethical considerations," the analyst concluded.