Forty-two percent of CEOs surveyed at the recent Yale CEO Summit believe that artificial intelligence (AI) could destroy humanity within the next five to ten years.
The poll was carried out at the semiannual event held by Yale's Chief Executive Leadership Institute for business leaders, political leaders, and scholars. The results, shared by a US media outlet after the virtual meeting wrapped up, were described as "pretty dark and alarming" by Yale professor Jeffrey Sonnenfeld, who heads the institute.
A total of 119 CEOs responded to the survey, including Coca-Cola CEO James Quincey, Walmart CEO Doug McMillon, media CEOs, and leaders of IT companies such as Zoom and Xerox, along with the heads of pharmaceutical and manufacturing companies.
Of those questioned, 34% believed that the tremendous strides AI technology is making could result in it destroying humanity within ten years, while a smaller share of respondents, 8%, thought humankind could face such an existential threat within just five years.
Despite hundreds of artificial intelligence researchers and technology executives having recently signed a stark warning that AI carries the risk of humanity's extinction, 58% of the CEOs said they were "not worried," as this could "never happen."
In a separate question, 58% of the surveyed CEOs insisted that the concerns regarding AI were not overstated, while 42% dismissed the much-publicized warnings of a potential catastrophe linked to AI's advance as overstated.
Previously, AI industry leaders and scholars signed an open letter urging swift steps to mitigate the risks ostensibly linked with the technology. The letter was signed by some of the industry's biggest players, with signatories including OpenAI CEO and ChatGPT creator Sam Altman; Geoffrey Hinton, the "godfather of AI"; Dan Hendrycks, director of the Center for AI Safety; and top executives from Microsoft and Google.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," said the statement published on May 30.
[Image: Screenshot of Twitter post by Dan Hendrycks, director of the Center for AI Safety. © Photo: Twitter]
Dan Hendrycks tweeted that the situation was "reminiscent of atomic scientists issuing warnings about the very technologies they've created."
The open letter was preceded by an April message, signed by Tesla CEO Elon Musk and a handful of other prominent figures in the field, advocating for a pause in AI research.