https://sputnikglobe.com/20230529/scientist-warns-of-looming-existential-threat-as-hyper-intelligent-ai-could-decide-to-take-over-1110772797.html
Scientist Warns of Looming 'Existential Threat' as Hyper-Intelligent AI 'Could Decide to Take Over'
2023-05-29T12:58+0000
Geoffrey Hinton, a British-Canadian computer scientist whose work on artificial neural networks earned him the nickname the “Godfather of AI,” recently joined a chorus of voices around the world warning that if and when AI becomes smarter than humans, it could have disastrous consequences.
AI pioneer Geoffrey Hinton is increasingly "unnerved" by "how smart" artificial intelligence (AI) tools are becoming.
"These things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening," the man, who has received the most prestigious award in computer science and computing machinery for his research, told the hosts of a US radio show.
The academic, who now lives in Toronto, Canada, spent 50 years of his professional career developing cutting-edge AI. Most recently, the 75-year-old worked for Google, but quit its parent company Alphabet earlier in May.
“I left so that I could talk about the dangers of AI without considering how this impacts Google,” he had tweeted.
He has since been on a crusade of sorts, warning of the “dangers” of the very technology that he helped to develop. In the new interview, Hinton recalled how, when he was testing out a chatbot at Google, the PaLM model, it seemed to understand a joke he had cracked. PaLM (Pathways Language Model) is a large language model developed by Google AI; the tech giant has since released an updated, next-generation model, PaLM 2, boasting "improved multilingual, reasoning and coding capabilities."
Over the course of this interaction, it dawned on the scientist that the era when AI might be able to "outperform" humans was not that far away.
"I thought for a long time that we were, like, 30 to 50 years away from that. So I call that far away from something that's got greater general intelligence than a person. Now, I think we may be much closer, maybe only five years away from that," Hinton said.
Referencing chatbots like OpenAI's ChatGPT, Hinton underscored that AI was trained to understand or learn any intellectual task that a human can manage.
"I'm not saying it's sentient," he said of AI, but added, "I'm not saying it's not sentient either."
Dismissing claims by opponents that the hue and cry over the dangers of AI were inflated, he added that this was not some science fiction problem, but rather a "serious problem that's probably going to arrive fairly soon, and politicians need to be thinking about what to do about it now."
"They can certainly think and they can certainly understand things. And, some people by sentient mean, ‘Does it have subjective experience?’ I think if we bring in the issue of subjective experience, it just clouds the whole issue and you get involved in all sorts of things that are sort of semi-religious about what people are like. So, let's avoid that," continued the man, who has been hailed as making "foundational breakthroughs in AI” amid “decade of contributions at Google.”
Hinton's warning comes as a growing number of technology leaders have sounded the alarm about the potential dangers of hyper-intelligent AI. Tesla CEO Elon Musk, AI pioneers Yoshua Bengio and Stuart Russell, along with thousands of others, signed a letter in April calling for a six-month pause on the development of more powerful AI systems. However, Hinton was not a signatory to the letter, as he did not think a pause was realistic in the current competitive world of AI.
"All I want to do is just sound the alarm about the existential threat," the computer scientist concluded.