https://sputnikglobe.com/20230411/google-staff-were-reportedly-leery-of-chatbot-bard-rollout-over-its-inaccurate--harmful-claims-1109350511.html
Google Staff Were Reportedly Leery of Chatbot 'Bard' Rollout Over Its 'Inaccurate & Harmful Claims'
Sputnik International
Google product reviewers reportedly warned against the rollout of chatbot Bard over its propensity for making "inaccurate and harmful claims."
2023-04-11T13:31+0000
Previously, Elon Musk, along with a group of artificial intelligence (AI) researchers and tech executives, underscored the need for a six-month moratorium on "giant AI experiments" to ensure that "the effects will be positive and their risks will be manageable."
Two concerned Google staff members attempted to stop the rollout of AI chatbot Bard back in March, according to a US media report. Product reviewers in Google's Responsible Innovation department warned that the conversational artificial intelligence chatbot, conceived as a rival to OpenAI's ChatGPT, was inclined to make "inaccurate and dangerous statements," insiders were cited as saying.
The staff members reportedly said that Bard could even trigger "tech-facilitated violence" via "synthetic mass harassment."
Bard is a tool designed to simulate human conversations using natural language processing and machine learning, and is based on the "Language Model for Dialogue Applications" (LaMDA). It was rolled out as a "limited experiment" in March.
According to the media report, Google's product reviewers did not hesitate to emphasize the concerns surrounding AI-powered large language models like Bard and OpenAI's ChatGPT when the US technology company's chief lawyer, Kent Walker, met with research and safety executives. But their objections were allegedly quashed, with Jen Gennai, who leads Google's Responsible Innovation team, purportedly stepping in to edit the reviewers' report. The recommendation to delay the Bard rollout was reportedly dropped.
Gennai was cited by the US media as acknowledging that she had amended the report, having "corrected inaccurate assumptions, and actually added more risks and harms that needed consideration." According to information on Google's website, its Bard chatbot is on course to be fully integrated into the company's search engine "soon."
In fact, Bard's release was preceded by years of reported internal dissent within the company over the risks and benefits of such AI-powered tools.
A Google engineer, Blake Lemoine, was fired by the tech giant after he voiced alarm about the possibility that LaMDA, Google’s artificially intelligent chatbot generator, could be sentient, media reported last summer. The engineer, who had been working on gathering evidence that LaMDA had achieved consciousness, was
placed on administrative leave for violating the company’s confidentiality policy. Google spokesperson Brian Gabriel said in a statement that the company had reviewed LaMDA 11 times and
"found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months."OpenAI released its ChatGPT in November 2022 to a plethora of both
praise and criticism, acquiring its first million users in less than a week. In March 2023, OpenAI introduced a new multimodal AI model,
GPT-4, capable of recognizing both text and images, as well as solving complex problems with greater accuracy.
The competition among tech giants was on. Microsoft and Google rose to the challenge. In late January, Microsoft said it would invest "billions of dollars" in OpenAI. It unveiled its AI chatbot, incorporated into its Bing search engine, in February. Google soon followed suit, rolling out Bard.
However, enthusiasm over the swift advent of AI-powered tools has apparently been matched by concern. On March 29, SpaceX and Tesla CEO
Elon Musk, along with a group of AI gurus and industry executives, called for a six-month pause in further work on AI systems potentially more advanced than chatbot developer OpenAI’s GPT-4. In an open letter, Musk, Apple co-founder Steve Wozniak, and Stability AI CEO Emad Mostaque, among other signatories, argued that this immediate pause should be
public, verifiable, and include all public actors.

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," the letter stated.
The European Union has since proposed legislation to regulate AI, while Italy temporarily banned ChatGPT. The Italian Data Protection Authority (Garante per la protezione dei dati personali) spearheaded the government-imposed ban on the chatbot over privacy concerns.