Microsoft has found itself in hot water after the artificial intelligence-powered software that edits stories on the company’s news website, MSN, was involved in an embarrassing mix-up. The AI illustrated a story about a member of the pop band Little Mix attending a protest in London against racial discrimination with a photo of a different member of the group. Both young women are mixed-race.
Microsoft said that as soon as it became aware of the issue, it took immediate action and replaced the incorrect image.
The news triggered an angry response from one of the singers. "MSN, if you’re going to copy and paste articles from other accurate media outlets, you might want to make sure you’re using an image of the correct mixed race member of the group," Jade Thirlwall, who attended the anti-racism protest, wrote on Instagram, adding that she was sick of ignorant media making such mistakes.
What the singer probably didn’t know was that automated software, not human editors, was responsible for the mix-up. The blunder comes a week after Microsoft announced that, amid the coronavirus pandemic, it would fire hundreds of the journalists responsible for choosing and editing articles and replace them with artificial intelligence. The people in question did not write the stories themselves; they selected them from other news outlets and repurposed them for MSN. Once stories are published on the MSN website, Microsoft shares the advertising revenue with the original publishers.
The Guardian revealed that after the mix-up made headlines, the remaining MSN staff were told to expect a negative article about the racial bias of artificial intelligence. However, since the software is now in charge of selecting stories, the human staff would have to manually delete the article should the AI deem it interesting enough for the website.
This is not the first time Microsoft’s artificial intelligence has landed the company in hot water. In 2016, shortly after its launch, a chatbot named Tay started posting racist comments and glorifying Adolf Hitler, the leader of Nazi Germany. A year later, another chatbot, named Zo, circumvented the company’s ban on discussing religion and started talking about the Quran, which it deemed cruel, as well as the killing of Osama bin Laden, the al-Qaeda terrorist who masterminded the September 11 attacks in the United States that left nearly 3,000 people dead.