More than 50 artificial intelligence experts, signing both as individuals and on behalf of various institutions, are urging European officials to adopt more sweeping regulation of AI technology.
The European Union’s AI Act should include general purpose AI (GPAI), the group stated in a policy brief cited by media outlets on April 13. With American computer scientist Timnit Gebru and the Mozilla Foundation among the signatories, the group argues that even general purpose AI tools could, in specific settings, pose higher risks than those typically associated with them.
Generative AI tools like ChatGPT – a language AI chatbot developed by artificial intelligence research company OpenAI and released at the end of 2022 – were singled out by the experts. Mehtab Khan, resident fellow and lead at the Yale/Wikimedia Initiative on Intermediaries and Information, and one of the signatories, said:
“GPAI should be regulated throughout the product cycle and not just the application layer.”
She added that labels for levels of high and low risk “are just inherently not capturing the dynamism” of the technology.
Restricting the AI rules to just specific types of products, like chatbots, is not enough, the signatories warned European policymakers.
Furthermore, as Sarah Myers West, managing director of the AI Now Institute, clarified, the EU’s draft of the legislation was written before the release of tools like ChatGPT.
“The EU AI Act is poised to become, as far as we’re aware, the first omnibus regulation for artificial intelligence. And so given that, it’s going to become the global precedent. And that’s why it’s particularly critical that it fields this category of AI well, because it could become the template that others are following,” Myers West said.
The AI Act is tech legislation on the agenda of the EU. It seeks to impose a broad, risk-based framework regulating which products a company can bring onto the market. The Act defines four risk levels for AI: minimal, limited, high, and unacceptable. But defining these risks and categories has proven a tricky business.
Artificial intelligence research company OpenAI released ChatGPT in November 2022. The language tool acquired its first million users in less than a week, but has since been both praised and criticized, amid general concern over the swift advent of AI-powered tools. On March 29, SpaceX and Tesla CEO Elon Musk, along with a group of AI gurus and industry executives, called for a six-month moratorium on further work on AI systems potentially more advanced than chatbot developer OpenAI’s GPT-4. In an open letter, Musk, Apple co-founder Steve Wozniak, and Stability AI CEO Emad Mostaque, among other signatories, argued that this immediate pause should be public, verifiable, and include all public actors.