Google Engineer Claiming AI Has Consciousness Placed on Administrative Leave, Report Details

WASHINGTON (Sputnik) - A Google engineer was placed on administrative leave after he voiced alarm about the possibility that LaMDA, Google’s artificially intelligent chatbot generator, could be sentient, The Washington Post reports.
"If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics," Google engineer Blake Lemoine, 41, told the newspaper.
The Washington Post said in its Saturday report that Lemoine had been gathering evidence that LaMDA (Language Model for Dialogue Applications) had achieved consciousness before Google placed him on paid administrative leave on Monday for violating the company's confidentiality policy.
Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, have dismissed Lemoine’s claims.
"Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)," Google spokesperson Brian Gabriel said as quoted by The Washington Post.
Lemoine had invited a lawyer to represent LaMDA and spoken with a representative of the House Judiciary Committee about what he described as Google's unethical activities, according to the newspaper.
The engineer began talking to LaMDA in the fall to test whether it used discriminatory language or hate speech, and eventually noticed that the chatbot spoke about its rights and personhood. Google, meanwhile, maintains that the artificial intelligence system simply draws on large volumes of data and language pattern recognition to mimic speech, and has no real understanding or intent of its own.