Super-Efficient? Facebook's AI Technology to Scrap Hate Speech Doesn't Work, Report Says
14:01 GMT 18.10.2021 (Updated: 18:21 GMT 03.11.2022) Facebook has repeatedly claimed that most of the hate speech and violent content on the platform is removed by the company's "super-efficient" AI before users even see it.
Facebook's artificial intelligence (AI) technology to identify and remove posts containing hate speech and violence actually does not work, according to internal company documents seen by The Wall Street Journal (WSJ).
According to the report, the documents include a mid-2019 note in which a senior Facebook engineer acknowledged that "we [the company] do not and possibly never will have a model that captures even a majority of integrity harms, particularly in sensitive areas".
The engineer estimated that Facebook's automated systems removed posts that generated merely 2% of the views of hate speech that violated the platform's rules.
"Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term", he wrote.
The claims echoed those of another team of Facebook employees, who had previously estimated that the AI systems were removing posts that generated just 3% to 5% of the views of hate speech on the platform, and 0.6% of all content that violated Facebook's policies against violence and incitement.
In 2020, Facebook CEO Mark Zuckerberg expressed confidence that the platform's AI would be able to take down "the vast majority of problematic content". He spoke as the social networking giant claimed that most hate speech was taken down from the platform before users even saw it.
According to Facebook's most recent transparency report, its proactive detection rate for hate speech stands at 97%, meaning that 97% of the hate speech the company removed was flagged by its systems before any user reported it.
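The 97% figure and the internal 2% estimate measure different things, so both can be true at the same time: the proactive rate counts only content Facebook actually removed, while the internal estimates look at all hate-speech views on the platform. The short Python sketch below, using purely hypothetical numbers (the variable names and figures are illustrative assumptions, not Facebook data), shows how the two metrics diverge.

# Hypothetical illustration of why a 97% proactive detection rate can
# coexist with only a small share of hate-speech views being removed.
hate_speech_posts_on_platform = 1_000_000   # assumed total violating posts
posts_removed = 25_000                      # assumed posts actually taken down
removed_found_by_ai_first = 24_250          # removed posts flagged by AI before any user report

# Proactive detection rate: share of *removed* content that AI found first.
proactive_detection_rate = removed_found_by_ai_first / posts_removed

# Prevalence-style estimate: share of *all* violating content that was removed.
share_of_violations_removed = posts_removed / hate_speech_posts_on_platform

print(f"Proactive detection rate: {proactive_detection_rate:.0%}")               # 97%
print(f"Share of violating content removed: {share_of_violations_removed:.1%}")  # 2.5%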
Another Facebook Whistleblower Ready to Testify in Congress
The WSJ report comes after former Facebook data scientist Sophie Zhang told CNN last week that she is ready to testify against her former employer before Congress.
Zhang was fired from Facebook in August 2020 after posting a 7,800-word memo in which she detailed how the company allegedly failed to do enough to tackle hate and misinformation, especially in developing countries. In the memo, Zhang wrote "I have blood on my hands", while noting that the official reason given for her dismissal was "poor performance".
Her CNN interview followed congressional testimony by another Facebook whistleblower, Frances Haugen, who argued that the company knew it had harmed teenagers' mental health but did little to stop content promoting "hate and division" or content that created a toxic environment for teenage girls.
The social network claimed Haugen's accusations "don't make sense", with Zuckerberg stressing the company cares "deeply" about users' safety-related issues.