You Said What? FaceApp Racism Fiasco Highlights Issues in Machine Learning

FaceApp, a popular artificial intelligence photo-editing application that uses a neural network to edit selfies, has apologized for building a fundamentally racist algorithm - a vivid demonstration of the risks of machines being bedeviled by human biases.

FaceApp launched in January and quickly surged in popularity - as of April 2017, it was reportedly gaining 700,000 new users daily.

Users upload a picture of their face and then apply a series of filters to alter their appearance - options include aging, changing gender and "hotness." That final option is responsible for the controversy the app now finds itself embroiled in: users who beautified their faces found their skin bleached white and their noses made more European, the evident implication being that the whiter someone is, the hotter.

In a statement, FaceApp founder and Chief Executive Yaroslav Goncharov said he was "deeply sorry" for the "unquestionably serious" issue.

"It's an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour. To mitigate the issue, we have renamed the effect to exclude any positive connotation associated with it. We are also working on the complete fix that should arrive soon," he added.

While the app's developers amend its code for cultural awareness, the filter remains available, although its name has been changed to "spark."

While an embarrassment for the company and a source of public outrage, the fiasco starkly underlines an inherent issue in machine learning: computers and technology are perceived to be impartial and objective, but they are in fact dependent on the data they are fed. If human biases creep into that data, machines will reflect them - an algorithm can be trained, deliberately or inadvertently, to be racist, sexist, homophobic or bigoted in any other way.
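As a purely illustrative sketch in Python - not FaceApp's actual code, and using made-up data - the toy model below "learns" an attractiveness score simply by counting labels supplied by hypothetical human raters, and faithfully reproduces whatever bias those raters baked into the labels.

from collections import defaultdict

# Hypothetical training set: human raters' "attractive" labels that happen to
# correlate with skin tone, i.e. the raters' bias is already in the data.
training_data = [
    ("light", 1), ("light", 1), ("light", 1), ("light", 0),
    ("dark", 1), ("dark", 0), ("dark", 0), ("dark", 0),
]

# "Training": estimate P(attractive | skin tone) by simple counting.
counts = defaultdict(lambda: [0, 0])  # tone -> [positive labels, total]
for tone, label in training_data:
    counts[tone][0] += label
    counts[tone][1] += 1

model = {tone: pos / total for tone, (pos, total) in counts.items()}
print(model)  # {'light': 0.75, 'dark': 0.25} - the raters' bias, now "learned"

Nothing in the code mentions race; the model has simply mirrored its data.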

Almost all modern consumer innovations employ machine learning in some way or other. Facial recognition technology, built into most new smartphones and adopted by every social network, is entirely dependent on the discipline, combing through millions of pieces of data and drawing correlations and predictions about the world from them. Google Translate taught itself to convert sentences from French to English in much the same way, and so on.


Predictive machine learning is both the current and the next big thing - while still in its infancy, it has already produced compelling results in medicine, forecasting which drugs a patient should take and how they will react to them.

However, increasing reliance on it in other areas has proven less effective. In August 2016, an investigation by The Century Foundation found that the machine learning programs some US courts use in pre-trial assessments, to predict who is likely to re-offend while on bail or after being released from prison, overwhelmingly rated black prisoners as a higher risk than whites. This is because the US justice system has historically shown a demonstrable bias towards incarcerating black defendants; a system designed to make criminal justice more equitable and race-blind merely ended up perpetuating human biases.

Google Translate has even been found to exhibit gender biases in certain languages. Turkish, for instance, has no gendered pronouns, yet the service assigns genders to particular professions: translating "he's a florist" into Turkish and back will produce "she's a florist," while "she's a doctor" will always come back as "he's a doctor." In essence, Google assumes a doctor is always male, and so on. While perhaps not such an issue in translation, if a similar gender bias asserted itself in job application algorithms, the implications could be serious.
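A toy illustration, again in Python and with invented counts - Google's actual system is far more complex - of why a purely statistical translator resolves the genderless Turkish pronoun "o" to a stereotyped English one: it simply picks the pronoun it has most often seen alongside each profession in its training text.

# Hypothetical co-occurrence counts of professions and English pronouns.
corpus_counts = {
    "doctor":  {"he": 9200, "she": 3100},
    "nurse":   {"he": 1100, "she": 8400},
    "florist": {"he": 900, "she": 2600},
}

def pick_pronoun(profession):
    """Choose the pronoun the corpus pairs with this profession most often."""
    seen = corpus_counts[profession]
    return max(seen, key=seen.get)

for job in ("doctor", "nurse", "florist"):
    print(f'genderless Turkish "o" + {job} -> "{pick_pronoun(job)} is a {job}"')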

Machine learning of language can produce troubling results more broadly, however. An April study found that if an algorithm learns English, it can become prejudiced against non-whites and women. For the study, researchers had a representative machine learning program teach itself the meanings of words from a text corpus of 840 billion words.

The program, like many others, learns definitions by ascertaining how often certain words appear together in the same sentence. For instance, it quickly determines the meaning of the word "bottle" by recognizing that it frequently occurs next to the names of liquids and other containers. The researchers found that foreign names were less frequently associated with positive words than white names, and that female names were far more often associated with abusive words than male names. In essence, by scouring the vast recesses of the racism- and sexism-infested internet, the AI came to view women and non-whites as potentially negative concepts.
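The kind of association the researchers measured can be sketched in miniature: the snippet below compares how close two names sit to "pleasant" versus "horrible" in an embedding space, using invented three-dimensional vectors in place of embeddings actually learned from web text.

import numpy as np

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point in the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up embeddings standing in for vectors learned from a large web corpus.
emb = {
    "Emily":    np.array([0.9, 0.1, 0.2]),
    "Jamal":    np.array([0.2, 0.8, 0.3]),
    "pleasant": np.array([0.8, 0.2, 0.1]),
    "horrible": np.array([0.1, 0.9, 0.4]),
}

def association(word):
    """Positive: closer to 'pleasant'; negative: closer to 'horrible'."""
    return cosine(emb[word], emb["pleasant"]) - cosine(emb[word], emb["horrible"])

for name in ("Emily", "Jamal"):
    print(name, round(association(name), 3))

With vectors learned from biased text, one name ends up with a markedly more negative score than the other, even though no prejudice was ever programmed in.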
