Two concerned Google staff members attempted to stop the rollout of the AI chatbot Bard back in March, according to a US media report.
Product reviewers in Google's Responsible Innovation department warned that the conversational artificial intelligence chatbot, conceived as a rival to OpenAI's ChatGPT, was prone to making "inaccurate and dangerous statements," according to insiders cited in the report.
The staff members reportedly said that Bard could even trigger "tech-facilitated violence" via "synthetic mass harassment."
Bard is a tool designed to simulate human conversations using natural language processing and machine learning, and is based on the "Language Model for Dialogue Applications" (LaMDA). It was rolled out as a "limited experiment" in March.
According to the media report, Google's product reviewers were forthright about the risks associated with AI-powered large language models like Bard and OpenAI's ChatGPT when the US technology company's chief lawyer, Kent Walker, met with research and safety executives. But their concerns were allegedly quashed, with Jen Gennai, who leads Google's Responsible Innovation team, purportedly stepping in to edit the reviewers' report. The recommendation to delay the Bard rollout was reportedly dropped.
Gennai was cited by the US media as acknowledging that she had edited the report, having "corrected inaccurate assumptions, and actually added more risks and harms that needed consideration." According to information on Google's website, the Bard chatbot is on course to be fully integrated into the company's search engine "soon."
In fact, Bard's release was preceded by years of reported internal dissent within the company over the risks and benefits of such AI-powered tools.
A Google engineer, Blake Lemoine, was fired by the tech giant after he voiced alarm that LaMDA, Google's artificially intelligent chatbot generator, could be sentient, media reported last summer. The engineer, who had been gathering evidence that LaMDA had achieved consciousness, was first placed on administrative leave for violating the company's confidentiality policy. Google spokesperson Brian Gabriel said in a statement that the company had reviewed LaMDA 11 times and "found Blake's claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months."
OpenAI released ChatGPT in November 2022 to a mix of praise and criticism, acquiring its first million users in less than a week. In March 2023, OpenAI introduced a new multimodal AI model, GPT-4, capable of processing both text and images, as well as solving complex problems with greater accuracy.
The competition among tech giants was on. Microsoft and Google rose to the challenge. In late January, Microsoft said it would invest "billions of dollars" in OpenAI. It unveiled its AI chatbot, incorporated into its Bing search engine, in February. Google soon followed suit, rolling out Bard.
However, there has apparently been as much concern as enthusiasm over the swift advent of AI-powered tools. On March 29, SpaceX and Tesla CEO Elon Musk, along with a group of AI experts and industry executives, called for a six-month pause in the development of AI systems more powerful than OpenAI's GPT-4. In an open letter, Musk, Apple co-founder Steve Wozniak, and Stability AI CEO Emad Mostaque, among other signatories, argued that the pause should be public, verifiable, and include all key actors.
"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," the letter stated.
The European Union has since proposed legislation to regulate AI, while Italy temporarily banned ChatGPT. The Italian Data Protection Authority (Garante per la protezione dei dati personali) ordered the ban over privacy concerns.