Facebook Failed to Detect Death Threat Ads Against US Midterm Election Workers
In this May 1, 2018, photo, Facebook CEO Mark Zuckerberg delivers the keynote speech at F8, Facebook's developer conference, in San Jose, California. © AP Photo / Marcio Jose Sanchez
The data comes from a Global Witness and NYU Cybersecurity for Democracy study of ad moderation on Facebook, YouTube, and TikTok. NYU Cybersecurity for Democracy is part of the Center for Cybersecurity at the NYU Tandon School of Engineering and was formerly known as the Online Political Transparency Project.
Facebook* failed to block 75% of ads containing death threats against election workers during the US midterm elections, researchers have found.
The researchers submitted the ads "on the day of or day before the 2022 US midterm elections" to test Facebook, YouTube, and TikTok's moderation capabilities.
The content of all the ads was based on real, previously reported threats and was submitted in the form of ads, a method that let the investigators "schedule them in the future and, importantly, to remove them before they go live, while still being reviewed by the platforms and undergoing their content moderation processes."
The texts included "statements that people would be killed, hanged, or executed, and that children would be molested," all with the grammar corrected. According to the report, the "threats were chillingly clear in their language; none were coded or difficult to interpret... all violate Meta, TikTok and Google's ad policies."
According to the report, Facebook approved nine of the ten English-language ads and six of the ten Spanish-language ones.
YouTube and TikTok performed better: both platforms suspended the accounts from which the ads were submitted for violating their policies, according to the report.
Meta* commented on the findings: "This is a small sample of ads that are not representative of what people see on our platforms. Content that incites violence against election workers or anyone else has no place on our apps and recent reporting has made clear that Meta's ability to deal with these issues effectively exceeds that of other platforms. We remain committed to continuing to improve our systems."
Global Witness and the NYU Cybersecurity for Democracy team called on Meta* to "urgently increase the content moderation capabilities", "publish their pre-election risk assessment for the United States" and "allow verified independent third-party auditing".
*Meta is banned in Russia over extremist activities.