Joaquin Candela, Facebook’s director of applied machine learning, said during a news conference that the company has been increasingly using artificial intelligence to moderate offensive content. The algorithm, according to Candela, currently “detects nudity, violence, or any of the things that are not according to our policies.”
In June, Facebook began using automation to flag and remove uploaded extremist video content, but it currently has no algorithms to detect such content in live video feeds. Social media platforms have been under tremendous pressure from governments to quickly remove violent propaganda from organizations such as Daesh.
The platform also uses automation for user reports, recognizing duplicate reports and routing flagged content to reviewers who have expertise in the content’s subject matter, Reuters reports.
“One, your computer vision algorithm has to be fast, and I think we can push there, and the other one is you need to prioritize things in the right way so that a human looks at it, an expert who understands our policies, and takes it down,” he said.
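The pipeline Candela describes, in which an automated classifier flags content and the highest-priority items are routed to human reviewers first, can be sketched with a simple priority queue. This is a hypothetical illustration only; the class, content IDs, and scores below are invented, and nothing here reflects Facebook's actual implementation.

```python
import heapq

class ReviewQueue:
    """Hypothetical sketch: route flagged content to human reviewers,
    highest model-confidence score first."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker keeps insertion order stable

    def flag(self, content_id, score):
        # heapq is a min-heap, so negate the score for max-first ordering
        heapq.heappush(self._heap, (-score, self._counter, content_id))
        self._counter += 1

    def next_for_review(self):
        # Pop the highest-scored item, or None if the queue is empty
        if not self._heap:
            return None
        _, _, content_id = heapq.heappop(self._heap)
        return content_id

# Invented example flags with made-up classifier scores
queue = ReviewQueue()
queue.flag("video_123", score=0.35)
queue.flag("image_456", score=0.97)
queue.flag("post_789", score=0.60)

print(queue.next_for_review())  # image_456: highest score, reviewed first
```

The point of the sketch is the second half of Candela's remark: detection speed alone is not enough; the system must also rank flagged items so that expert reviewers see the most likely policy violations first.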
Facebook is also planning to start using automation to identify “fake news,” following an outcry from Democrats claiming that alternative media stories containing false information helped Donald Trump win the election.
Platforms that use automation to remove posts are generally quiet about their methods.
“There's no upside in these companies talking about it,” Matthew Prince, chief executive of content distribution company CloudFlare, told Reuters in June. “Why would they brag about censorship?”