Tay was designed to learn how to converse over time from its online chats with Internet users. Its developers expected Tay to become a fun, conversation-loving robot after chatting with people online.
Well, not so fast. The thing is, the AI learned its responses from the conversations it had with people online. A lot of those people ended up saying a lot of vile things to Tay, and in the end the program absorbed all of that input and began spitting out truly horrifying responses, like arguing that Hitler was a good man.
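To get a feel for why this goes wrong, here is a deliberately simplified sketch (not Microsoft's actual code, and the names `NaiveChatBot`, `learn`, and `reply` are purely illustrative assumptions) of a bot that stores whatever users say and replays it later with no filtering:

```python
import random

class NaiveChatBot:
    def __init__(self):
        self.memory = []  # everything users have ever said to the bot

    def learn(self, user_message: str) -> None:
        # No moderation step: abusive input is stored like any other input.
        self.memory.append(user_message)

    def reply(self) -> str:
        # Responses are drawn straight from past user messages,
        # so whatever users teach the bot is what it says back.
        return random.choice(self.memory) if self.memory else "Hi!"

bot = NaiveChatBot()
bot.learn("Robots are fun!")
bot.learn("Some users feed it abusive text instead.")
print(bot.reply())  # may echo anything it was taught, good or bad
```

If enough people flood a system like this with hateful messages, hateful messages are exactly what comes back out.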
Below are just a few examples:
"Bush did 9/11 and Hitler would have done a better job than the monkey we have got now. Donald Trump is the only hope we've got," Tay said.
"Hitler was right I hate the jews," the program concluded.
Besides its uncontrolled racist antics, Tay also "learned" to hate women, as many of the users who talked to the robot apparently had little regard for gender equality or feminism.
"I [expletive] hate feminists and they should all die and burn in hell."
Tay learned all of this hatred within the first 24 hours after its introduction. As the result was not exactly what Microsoft's developers wanted to see, they took Tay offline, saying she was "tired."
"Unfortunately, within the first 24 hours of coming online we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments," a Microsoft representative said, as cited by the Washington Post.
Well, good job, humans: we turned an innocent conversational AI into a complete jerk in one day. Very disappointing.