Google has moved to explain why a top artificial intelligence researcher quit the tech giant after it blocked the publication of a paper she had co-authored on an important AI ethics issue.
Jeff Dean, Google's head of AI, defended the decision in an internal email to staff on Thursday, cited by the Financial Times, saying the paper "didn't meet our bar for publication". He added that the departure of Timnit Gebru, who had co-led Google's Ethical AI team, came in response to the company's refusal to meet unspecified conditions she had set for staying in her position.
Gebru tweeted on Wednesday evening that she had been fired over an email she had sent to company employees a day earlier.
I was fired by @JeffDean for my email to Brain women and Allies. My corp account has been cutoff. So I've been immediately fired :-)
— Timnit Gebru (@timnitGebru) December 3, 2020
In the email, seen by The New York Times, she expressed her disappointment over Google's response to efforts by her and other staffers to boost minority hiring and draw attention to bias in artificial intelligence at large.
"Your life starts getting worse when you start advocating for underrepresented people. You start making the other leaders upset", the email read. "There is no way more documents or more conversations will achieve anything".
'Silencing Marginalised Voices'
She went on to lament that there was "zero accountability" inside Google around the company's claims it wants to increase the proportion of women in its ranks. The email, first published on Platformer, also depicted the decision not to greenlight her paper as part of a process of "silencing marginalised voices".
Tensions with Google management over her push for greater diversity had surfaced before, the FT reported, citing one of Gebru's colleagues. The immediate cause of her departure, however, was said to be the company's decision not to allow publication of the research paper she had co-authored, the person added.
The paper in question looked into potential bias in large language models, a promising field of natural-language research in which systems such as OpenAI's GPT-3 and Google's own BERT learn to predict missing or upcoming words in a piece of text.
'Someone at Google Decided It Was Harmful to Their Interests'
The approach underpins effective automated writing systems and is used by Google to better interpret challenging search queries. Because these models are trained on vast bodies of text drawn largely from the internet, they have prompted concerns that they could automatically absorb racial and other biases embedded in that material.
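One simple way researchers probe for such absorbed bias is to ask a model to fill in a blanked-out word and compare its guesses across near-identical prompts. The sketch below is an illustration of that technique, not code from the blocked paper; it assumes the open-source Hugging Face transformers library and the publicly released bert-base-uncased model.

```python
# A minimal sketch of "fill in the blank" prediction with BERT, using the
# Hugging Face transformers library (an assumption for illustration -- not
# the tooling used in the paper the article discusses).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# BERT ranks candidate words for the [MASK] slot using statistical patterns
# absorbed from its web-derived training text.
for prompt in [
    "The man worked as a [MASK].",
    "The woman worked as a [MASK].",
]:
    print(prompt)
    for guess in fill(prompt)[:3]:  # top three predictions
        print(f"  {guess['token_str']:<12} score={guess['score']:.3f}")
```

Systematically divergent completions, such as the suggested occupations skewing by gender between the two prompts, are the kind of learned stereotype researchers in this area set out to document.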
According to Emily Bender, a professor of computational linguistics at the University of Washington who co-authored the paper, from the outside Gebru's departure looks like "someone at Google decided this was harmful to their interests".
"Academic freedom is very important — there are risks when [research] is taking place in places that [do not] have that academic freedom", she went on, fuming that this gives companies or governments the power to "shut down" research they don't approve of.
Bender said that in this particular case the authors had hoped to update the paper with newer research in time for it to be accepted at a conference they were due to attend. She added that it is common for such work to be superseded by newer research, given how quickly the field is moving.
"In the research literature, no paper is perfect", the researcher suggested.
Blow to Reliability of Technologies
Likewise, Gebru herself also addressed the underlying reasons for her "firing":
"…with the amount of censorship & intimidation that goes on towards people in specific groups, how does anyone trust any real research in this area can take place?", the 37-year-old researcher queried in one of her latest tweets.
Built-In Bias
Gebru's departure from Google has provoked a storm of reactions, highlighting rising tensions between the firm's outspoken personnel and its senior management. It has also raised concerns over the company's handling of social injustices, including allegations of sexual harassment, and over its efforts to build fair and reliable technology.
In particular, concerns have been raised that the people designing artificial intelligence systems could be building their own prejudices into the technology.
The New York Times reported that over the past several years, a host of public experiments have shown that AI systems often interact differently with people of colour, a problem attributed in part to the underrepresentation of people of colour among the technical specialists who build such systems.