US senators have introduced two separate bipartisan bills on artificial intelligence, in an apparent sign that both Democrats and Republicans agree that the government should be involved in addressing AI-related issues.
One bill was unveiled by Democrat Gary Peters, Chairman of the Senate Homeland Security and Governmental Affairs Committee, along with Republican Senators Mike Braun and James Lankford.
The bill would require federal agencies to notify people when they are using AI to interact with them. It would also call on agencies to establish a way for people to appeal any decisions made by AI.
Commenting on the bill, Braun said in a statement: "No American should have to wonder if they are talking to an actual person or artificial intelligence when interacting with the government. The federal government needs to be proactive and transparent with AI utilization and ensure that decisions aren’t being made without humans in the driver’s seat."
Peters echoed him, stressing that "artificial intelligence is already transforming how federal agencies are serving the public, but government must be more transparent with the public about when and how they are using these emerging technologies."
Separately, Democratic Senators Michael Bennet and Mark Warner, along with Republican Senator Todd Young, rolled out a bill to establish an Office of Global Competition Analysis, which would assess US competitiveness in AI relative to other countries, including China.
"This legislation will better synchronize our national security community to ensure America wins the technological race against the Chinese Communist Party. There is no single federal agency evaluating American leadership in critical technologies like artificial intelligence and quantum computing, despite their significance to our national security and economic prosperity. Our bill will help fill this gap," Young said.
The bills come after media outlets cited the US Department of Commerce as saying that Washington is examining whether AI-based programs such as ChatGPT should be subject to checks, amid concerns that they could be used to commit crimes and spread misinformation.
OpenAI’s ChatGPT language model, launched in late November 2022, drew mixed reactions for its ability to mimic human conversation and generate original text from users’ prompts. While some praised ChatGPT for its professional applications, others criticized its potential for abuse, such as students using the model to write essays.