'There Should Always Be a Human Accountable for Any Action of a Robot' - Prof

Granting legal status to robots is nonsensical and impractical, leading experts in robotics, AI, ethics, law and medical science have told the European Commission in an open letter. The warning follows the European Parliament's approval last year of a resolution that would grant legal status to autonomous robots.

Sputnik discussed the legal status of robots with Noel Sharkey, professor emeritus of AI and robotics at the University of Sheffield and co-signatory of the letter to the European Commission.

Sputnik: Why is granting robots a legal status considered to be a violation of human rights?

Noel Sharkey: The robot could be sued if it causes harm or commits a misdemeanor. For instance, if an autonomous car, which is a robot, causes an accident on the road, the robot itself would be legally responsible: you would have to sue that car, and the car would have to accrue funds and pay compensation. This seems really, really bad. I'm a co-director of the Foundation for Responsible Robotics, and we believe that there should always be an accountable human for any action of a robot. That person is legally responsible for the actions of that robot; otherwise companies can slide out of their responsibility.

Sputnik: As robots become more developed and AI capabilities expand, how can a person be de facto responsible for a robot if it's making autonomous decisions that are basically impossible to predict?

Noel Sharkey: We're not there yet, and that's one of the things we should be able to predict: any decision that impacts people's lives. There is some fantasy about where the technology is going and how quickly it's getting there, but this is not supported by evidence at the moment. We're seeing a lot of artificial intelligence expansion through the business community, but it still uses techniques developed in the 1980s. Most of the work is machine learning using big data; we have not seen any independent intelligence from machines, so why would we try to develop laws based on speculation about future technology?

Sputnik: How do you see this moving forward? Under current law, if we look at Belgium, this would fall under Article 1384 of the Belgian Civil Code, which covers liability for things in one's custody. Do you think that's an appropriate way to classify robots, especially robots that have machine-learning capabilities and can make some autonomous decisions?

Noel Sharkey: There's a slight distortion of the words "autonomous" and "autonomous decisions," because in robotics the word autonomy came about when computers became small enough to go on a robot; it was as simple as that. It has nothing to do with philosophical autonomy, political autonomy or free will. An autonomous robot is a programmed device that runs under a program in the real world; it looks as if it's making autonomous decisions, but those decisions have been set up by a human.
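To make Sharkey's point concrete, here is a minimal illustrative sketch (hypothetical code, not any real robot's software; the function name and threshold are invented for illustration) of what an "autonomous" decision amounts to in practice: a rule and a parameter that a human wrote in advance.

```python
# Minimal sketch of an "autonomous" decision: every branch below was
# written by a human ahead of time, so the robot only executes choices
# a person already made. (Illustrative only; not real control code.)

def should_brake(signal_is_red: bool, obstacle_distance_m: float) -> bool:
    """Return True if the vehicle should brake.

    The condition and the threshold are human-chosen parameters,
    not judgments the machine arrived at on its own.
    """
    SAFE_DISTANCE_M = 25.0  # human-chosen safety margin (assumed value)
    return signal_is_red or obstacle_distance_m < SAFE_DISTANCE_M

# The "decision" to stop is fully determined by the program:
print(should_brake(signal_is_red=False, obstacle_distance_m=10.0))  # True
```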

Sputnik: What kind of status do you think is optimal for robots?

Noel Sharkey: Well, for me a robot is no different from an electric toothbrush or a refrigerator; it's an autonomous device that runs on programs, and keeping humans responsible focuses the human mind on making sure that they can predict what their device will do. If it is knowable what their device can do, then they're completely responsible legally. There are possibilities that we couldn't know what the device would do, through no fault of their own, and then the law is quite different. All I'm saying is keep humans at the forefront of this. I'm not talking about insignificant decisions: if a train going down the track stops when the red light comes on, that's not really a decision that would impact anybody's life; when my microwave switches itself off because the food has got really hot, there's artificial intelligence in it, but that's not a decision that impacts people's lives. But when a decision can impact people's lives in terms of their job, their human rights, or even life-and-death decisions, that's when a human should be accountable.

Sputnik: How soon do you think we're going to see machines and robots that can make their own decisions?

Noel Sharkey: It's happening already; it's happening in the financial sector all the time. I don't like to think of inanimate machines as making decisions; I prefer the phrase "we are delegating decisions to them," because we've put the decisions in their algorithms. There are areas, for instance in conflict and policing, where robots are being increasingly developed to kill people, or to harm them, stun them, or hurt them in different ways to prevent their actions, and here again we need full human accountability for that.
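That idea of delegation can even be made explicit in software design. The sketch below is purely hypothetical (the names and structure are invented for illustration, not drawn from any real system or legal mechanism): it records an accountable human alongside every delegated decision, so the audit trail always leads back to a person rather than to the machine.

```python
# Hypothetical sketch: attach an accountable human to every delegated
# decision, so responsibility can always be traced back to a person.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    action: str               # what the system did
    accountable_person: str   # the human responsible for this logic
    timestamp: str            # when the delegated decision ran

def delegate_decision(action: str, accountable_person: str) -> DecisionRecord:
    """Log who is accountable each time the machine acts on our behalf."""
    return DecisionRecord(
        action=action,
        accountable_person=accountable_person,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

record = delegate_decision("halt_conveyor", accountable_person="operator_on_duty")
print(record)  # the audit trail names a human, not the robot
```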

Sputnik: It sounds scary when you say that robots are already being developed and used to harm people. Is it really a frightening prospect? Stephen Hawking seemed quite concerned; he thought that AI was a big threat to humankind.

Noel Sharkey: I don't share the concern of some people that the machines are going to rise up and kill us all; I can't see that happening for a very long time, if ever. Machines are not motivated. Take the most complex game in the world, the game of Go: a neural learning system beat the world's best players a couple of months ago, and that was a remarkable engineering achievement, but it's a game with set rules; it's not the real world. This kind of artificial intelligence is called "narrow artificial intelligence"; it can do one task. The holy grail of artificial intelligence is called general artificial intelligence, and we haven't even begun to make a start on that yet. It might happen one day, but I'm a bit skeptical; we don't really know, and I'm not really worried about that. What I'm worried about is not artificial intelligence, but human stupidity.

 
