Google has confirmed that it will not rely on artificial intelligence (AI) alone to clean up unwanted content on YouTube, out of concern that AI could indiscriminately remove all content related to terrorism. As with Facebook's anti-terror plan, AI is not front and center in the company's effort to remove terror-related videos, since some footage of terrorist attacks may actually be informative or newsworthy.
Google will instead devote more engineering resources to identifying and removing extremist and terrorism-related content.
YouTube's Trusted Flagger program, a community of volunteers who rate videos, will be expanded by recruiting and funding 50 more non-government organizations with expertise in matters such as hate speech, self-harm and terrorism, so that YouTube can benefit from more people capable of making "nuanced decisions about the line between violent propaganda and religious or newsworthy speech."
Official YouTube Blog: Growing our Trusted Flagger program into YouTube Heroes https://t.co/CSZpS6KWeo
Content that does not breach Google's guidelines but comes close will be preceded by warnings and will have commenting disabled.
"We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints," Kent Walker, Google's general counsel, said in a recent interview.
Google also plans to help fight terror by directing potential "Daesh recruits" towards videos and footage that may change their minds about joining the terror organization.
"In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages," Walker said.
In a continued effort to clean up its image, Google's YouTube clamps down on hateful videos https://t.co/fgtXNn2z9e pic.twitter.com/TEO7wQXv8C
— immediate future (@iftweeter) 13 June 2017
Recent terrorist attacks in London, and subsequent comments by UK Prime Minister Theresa May that the internet offers terrorists a "safe place to breed," mean that large digital and social media companies will now have to start "cleaning up" their act.
Governments worldwide are already confronting such companies over encryption, and if internet companies are seen to be abusing their social license, the risk of further regulation is real.
Some sources suggest that Google released a statement explaining how its YouTube "cleanup" will work partly because it is worried that investors will see such regulation as a threat to revenue.