Leading AI Companies Move to Self-Regulate After Lawmakers Discuss Industry Controls
A group of tech firms behind some of the most advanced artificial intelligence (AI) programs has teamed up to form a new industry regulatory body amid fears that US lawmakers may soon impose their own regulations on the industry.
The Frontier Model Forum (FMF), which says its purpose is to develop safety standards for the emerging AI industry, was announced on Wednesday by Google, ChatGPT-maker OpenAI, Microsoft and Anthropic.
"It is vital that AI companies - especially those working on the most powerful models - align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible," Anna Makanju, OpenAI's vice president of global affairs, said in a statement.
Microsoft President Brad Smith added that "companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity."
The FMF says any organization is free to join, so long as it is working on what it calls "frontier AI models," which it defines as "large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks."
Congress Seeks Oversight

The announcement comes after the four FMF founding firms, plus Amazon, Meta*, and Inflection, met at the White House and pledged to voluntarily adopt safeguards for “safety, security and trust,” as US President Joe Biden put it.
That includes making sure their technology is safe before releasing it to the public, safeguarding their AI against cyber threats, and labeling content that has been altered or AI-generated, as well as “rooting out bias and discrimination, strengthening privacy protections, and shielding children from harm,” according to Biden.
The firms also agreed to “find ways for AI to help meet society’s greatest challenges - from cancer to climate change - and invest in education and new jobs,” he said.
Voluntary self-policing is one thing, but on Tuesday federal lawmakers held their latest hearing aimed at something more concrete: legal regulation.
The panel included Anthropic CEO Dario Amodei; Yoshua Bengio, an AI professor at the University of Montreal and one of the fathers of modern AI science; and Stuart Russell, a computer science professor at the University of California at Berkeley. All three called for some kind of industry regulation, but disagreed on how it should be done.
Sen. Josh Hawley (R-MO), an outspoken critic of Big Tech firms, said he was concerned about monopolization of the technology.
“I’m confident it will be good for the companies, I have no doubt about that,” Hawley said. “What I’m less confident about is whether the people are going to be all right.”
Hawley pointed out that the tech firms at the forefront of AI research were “the same people” that had for years dodged other forms of congressional oversight.
The European Union has also moved to regulate AI, introducing its first regulatory framework last month, which bans AI judged to pose an "unacceptable risk" to people and restricts and governs what it considers "high-risk" uses of the technology. China, a leading center of AI development, is also in the process of developing a regulatory framework.
Threat or Hype?

Paradoxically, some AI pioneers have raised fears about the technology themselves, while critics counter that AI developers are deliberately overhyping their products’ potential in an attempt to shape public perception.
Earlier this year, Bengio and fellow AI pioneer Geoffrey Hinton, both of whom have worked since the 1990s to lay the foundations of modern AI systems like ChatGPT and Bard, warned that the technology was developing so quickly it could become dangerous. Speaking before Congress in May, Hinton compared the arrival of AI to first contact with an extraterrestrial species.
In March, Anthony Aguirre, a professor of the physics of information at the University of California, Santa Cruz, and executive director of the Future of Life Institute, circulated a letter asking AI developers for a six-month pause on developing the technology. The institute was founded in 2014 to study what it claims are various existential risks to humanity, including superintelligent artificial general intelligence (AGI).
AGI is regarded as the point at which artificial intelligence surpasses human intelligence.
The institute is closely tied to effective altruism, a utilitarian movement popular with entrepreneurs who claim to prioritize actions that will generate the most good for all of humanity. Twitter owner Elon Musk is both a strong proponent of effective altruism and a board member of the Future of Life Institute, to which he gave a grant in 2015 specifically to research the threat posed to humanity by AGI.
Musk has criticized the leading AI research firms, calling Google co-founder Larry Page “cavalier” about the threat, although Musk was also one of the founding board members of OpenAI in 2015.