The Google-owned video site ramped up its efforts roughly two months after UK Prime Minister Theresa May, responding to a terrorist attack on London Bridge that killed eight people and injured 48, called on technology firms to take a proactive stance on removing online extremist propaganda.
According to CNET, YouTube is using machine learning technology to identify violent extremist content, removing more than 75 percent of such videos before a single user flags them as inappropriate. The technology's improved accuracy has more than doubled the number of videos YouTube takes down, even as enormous amounts of content are uploaded every minute.
The video site has also expanded its counter-terrorism efforts by automatically playing videos that debunk extremist recruiting myths when users search for certain keywords.
YouTube is also introducing a feature in the coming weeks that places videos with "inflammatory religious or supremacist content" in a "limited state" behind a warning. These videos will not be monetized or recommended, and will not be eligible for comments or user endorsements.
In addition, the company is adding independent experts to its Trusted Flagger program, a network of groups and individuals that report videos which may violate the company's guidelines. YouTube is working with more than 15 new organizations, including the No Hate Speech Movement, the Institute for Strategic Dialogue and the Anti-Defamation League.
Kent Walker, a senior vice-president at Google, said in a blog post, "Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all. Google and YouTube are committed to being part of the solution. We are working with government, law enforcement and civil society groups to tackle the problem of violent extremism online. There should be no place for terrorist content on our services."