While artificial intelligence (AI) may prove beneficial in many areas, much linked to the technology could also "go wrong," Sam Altman, chief executive officer of OpenAI Inc., has acknowledged.
"We work with dangerous technology that could be used in dangerous ways very frequently," Altman stated at a recent Technology Summit in San Francisco.
According to the American entrepreneur and programmer, whose OpenAI has been valued at more than $27 billion, the "benefits outweigh the costs" of the new technology. Altman singled out science, education, and medicine as promising fields for the application of AI advances, adding:
"I think it'd be good to end poverty, but we’re going to have to manage the risk to get there."
OpenAI CEO Sam Altman, whose company developed ChatGPT.
© AP Photo / Alastair Grant
Altman also weighed in on ongoing calls by lawmakers to regulate artificial intelligence, saying:
"I think global regulation can help make it safe, which is a better answer than stopping it."
Altman touted the huge success of OpenAI's products, such as the chatbot ChatGPT and the image generator DALL-E, and insisted that his concerns about AI at this stage were "not about money."
"I have enough money... This concept of having enough money is not something that is easy to get across to other people," Altman said, underscoring that building so-called guardrails for the use of AI is one of the "most important" steps that "humanity has to get through with technology."
Altman also weighed in on Tesla CEO Elon Musk's recent warnings of AI's potential to do harm. Musk, who co-founded OpenAI with Altman, "really cares about AI safety a lot," Altman said, adding that the alarm bells Musk had been sounding were "coming from a good place."
Previously, hundreds of artificial intelligence researchers and technology executives signed a stark warning that AI poses an existential threat to humanity.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," said the statement published on May 30.
The release carried the signatures of some of the industry’s top names, including Altman, the "godfather of AI" Geoffrey Hinton, Director of the Center for AI Safety Dan Hendrycks, and top executives from Microsoft and Google.
An earlier open letter, published in March, drew the signatures of over 1,000 academics, business leaders, and technology specialists urging a pause in AI development until the technology can be regulated and run responsibly.