Earlier this year, Facebook's news aggregator Trending Topics came under fire for allegedly promoting certain stories based on the preferences of its all-too-human editors. The social media giant replaced its team of human editors with software that, it was hoped, would sidestep their inherent biases. Facebook users and tech pundits remain unimpressed.
As TechCrunch put it, "This is hardly the first time questions of bias have arisen in the realm of machine learning and AI — and it won't be the last."
As the authors of any number of speculative-fiction books have warned, machines are built by humans, and devices inherit the biases of their creators. According to Robyn Caplan, a research analyst at Data & Society, an institute that studies digital communications systems, "Algorithms equal editors. […] Humans are in every step of the process — in terms of what we're clicking on, who's shifting the algorithms behind the scenes, what kind of user testing is being done, and the initial training data provided by humans."
There have been several cases in which AI services have demonstrated their limits. Last year, Google's photo app notoriously labeled two black people as "gorillas." More recently, a ProPublica investigation found that recidivism-prediction software used in courtrooms rated black defendants as higher risk than comparable white defendants. And just last month, the first beauty contest judged by an AI sparked a debate when observers noticed that the system disfavored contestants with dark skin: of the 44 winners, nearly all were white.
As of today, an AI mirrors the cultural values of its creators. Although preventing bias is not easy, machine learning has the potential to operate without it, if programmers work as deliberately toward that goal as they do toward accuracy.
"The great promise of machine learning is that it's better at making decisions than humans," TechCrunch stated, meaning that, in theory at least, it is "faster, more efficient, less prone to error."