The Primal Instinct: AI Machines of the Future Could Feel Fear

© Photo: Pixabay
As artificial intelligence (AI) becomes more capable and outperforms humans in a growing number of areas, researchers are on a quest to give it emotions that could drive the next level of development. Among them is fear: tech experts are attempting to make AI robots and machines feel 'fear.'

A paper currently under review by academics at the International Conference on Learning Representations (ICLR) explores the possibility of building the fear factor into AI machines.

The paper's abstract says: "We might hope for an agent that would never make catastrophic mistakes. At the very least, we could hope that an agent would eventually learn to avoid old mistakes. Unfortunately, even in simple environments, modern deep reinforcement learning techniques are doomed by a Sisyphean curse." 

The abstract goes on to suggest that AI 'agents' are just as likely to forget old experiences as they are to take in and remember new ones. "Consequently, for as long as they continue to train, state-aggregating agents may periodically relive catastrophic mistakes."

The scientists attempted to induce fear in agents in order to train them to avoid dangerous situations. Their paper argues that just as AI machines can be rewarded for making good decisions, they can be punished for making wrong ones, and so come to fear the consequences of those actions or decisions.

Using Deep Reinforcement Learning (DRL), AI machines are trained to make good decisions by chasing rewards.
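
For readers unfamiliar with the term, the loop below is a minimal sketch of that reward-chasing idea. It uses plain tabular Q-learning on a tiny, made-up environment rather than the deep networks and benchmarks the researchers actually work with; every name and constant in it is illustrative.

```python
# Minimal reward-chasing sketch: tabular Q-learning on a toy 5-state chain.
# The environment and all constants here are invented for illustration only.
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]  # value estimates per state/action
alpha, gamma, epsilon = 0.1, 0.99, 0.1            # learning rate, discount, exploration

def step(state, action):
    """Toy environment: action 1 walks toward the last state, which pays a reward."""
    next_state = min(state + action, n_states - 1)
    reward = 1.0 if (state == n_states - 1 and action == 1) else 0.0
    return next_state, reward

state = 0
for _ in range(1000):
    # Epsilon-greedy: mostly exploit the best-known action, occasionally explore.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Chase the reward: nudge the estimate toward reward + discounted future value.
    target = reward + gamma * max(Q[next_state])
    Q[state][action] += alpha * (target - Q[state][action])
    state = next_state
```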

"If you stray too close to the edge of a roof your fear system kicks in and you step away to stop yourself from falling over. But if the fear you feel is so powerful and it stays with you, you might develop an irrational fear where you never step foot on a roof ever again," Zachary Lipton, co-author of the paper and researcher at UC San Diego told The Register.

The academics suggest the reward signal can also work in reverse: if machines can be rewarded for making good decisions, they can equally be penalised for making wrong ones, and perhaps learn to fear the consequences.
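
One simple way to picture that punishment is reward shaping: subtract a fear penalty whenever a state looks dangerous. The snippet below is only a hand-waved sketch of the idea, not the authors' actual method; `DANGER_STATES`, `FEAR_WEIGHT` and `danger_probability` are hypothetical names, and in practice a learned model would presumably supply the danger estimate rather than a hard-coded list.

```python
# Hand-waved sketch of fear as a penalty folded into the reward signal.
# DANGER_STATES, FEAR_WEIGHT and danger_probability are illustrative, not from the paper.

DANGER_STATES = {4}   # hypothetical states the agent should learn to dread
FEAR_WEIGHT = 0.5     # how strongly the fear penalty weighs against ordinary reward

def danger_probability(state) -> float:
    """Stand-in for a danger estimator; here just a hard-coded lookup."""
    return 1.0 if state in DANGER_STATES else 0.0

def shaped_reward(state, reward):
    """Ordinary reward minus a penalty proportional to how dangerous the state looks."""
    return reward - FEAR_WEIGHT * danger_probability(state)
```

Dropping `shaped_reward` into the Q-learning target from the earlier sketch would steer the agent away from the flagged states while it keeps chasing the ordinary reward.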
