UK Prime Minister Theresa May Blames Internet for Terrorism, Internet Bites Back

Following the June 3 London Bridge attacks, UK Prime Minister Theresa May angrily pilloried the internet and internet-based services for giving extremist ideology "the safe space it needs to breed." In response, many leading online platforms have struck back at her statements — and a rights group has said her proposals missed the point entirely.

May's speech was unambiguous — things "cannot and must not" continue as they are, and must change in "four important ways." Ranked second on her list, after extremist Islamic ideology and ahead of tolerance of extremism at home and a review of existing counterterror strategies, loomed the internet.

​"We cannot allow this ideology the safe space it needs to breed. Yet that is precisely what the internet — and the big companies that provide internet-based services — provide. We need to work with allied, democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremism and terrorist planning. We need to do everything we can at home to reduce the risks of extremism online," she said.

While no specific internet firm was named, several — easily befitting the label of "big companies" providing "internet-based services" — felt they had been implicitly singled out by the Prime Minister, and were moved to publish responses. The underlying message was the same in each case: the companies argued they were already doing what May now demanded of them.

First, Google said it was committed to ensuring terrorists did not have a voice online, and to working with the government and NGOs to achieve that end.

"We are already working with industry colleagues on an international forum to accelerate and strengthen our existing work in this area. We employ thousands of people and invest hundreds of millions of pounds to fight abuse on our platforms and ensure we are part of the solution to addressing these challenges," the company said.

In terms of practical measures for fighting extremism, the search giant has a policy of removing links to illegal content from its search results once it identifies such material — or indeed is notified of it by users. Moreover, its streaming platform YouTube takes down any videos inciting violence, flagging them to ensure they cannot be reuploaded — and bans accounts it believes are operated by terrorist organizations.

Similarly, social media mammoth Facebook, which boasts 30 million users in the UK alone, highlighted its ongoing work on combating terrorism on its own networks. The company noted it prohibits any content supporting terrorist activity, and allows users to report potentially infringing material to human moderators. Image-matching technology is also employed to check uploaded photos against material already banned from the platform for promoting terrorism, while potential evidence of impending attacks is forwarded to law enforcement.

"We do not allow groups or people that engage in terrorist activity, or posts that express support for terrorism. We want Facebook to be a hostile environment for terrorists. Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it. Online extremism can only be tackled with strong partnerships — we have long collaborated with policymakers, civil society, and others in the tech industry, and we are committed to continuing this important work together," a Facebook spokesperson said.

Twitter's response was perhaps the most biting of the trio, with a spokesperson curtly stating terrorist content had "no place" on its network, and the company was continuing to expand the use of technology as part of a systematic approach to remove such content. Twitter would, it said, "never stop working" to "stay one step ahead" and continue to engage with partners across industry, government, civil society and academia.

Moreover, the spokesperson noted the platform had suspended 376,890 accounts in the six months leading up to December 2016, of which 74 percent were detected via internal tech, and a mere two percent resulted from government requests.

It was not merely within the tech industry that May's comments provoked ire and skepticism. Campaign organization Open Rights Group (ORG) expressed concern that the government may use the attack, and other recent atrocities, to pursue policies that are "ineffective, meaningless or dangerous." If that came to pass, ORG fears many may feel these events are "being exploited, rather than dealt with maturely."

​"What we have heard does not give us confidence that proposals will be necessary, proportionate, and ensure legal accountability. May's speech had the feel of electioneering rather than a common-sense, values and evidence based approach. That is simply not being sufficiently serious and respectful about what has happened," said ORG Executive Director Jim Killock in a statement.

Moreover, Killock said he was disappointed that the government's response to the attack — as with the Westminster and Manchester atrocities — focused on encryption. This "very risky" approach, he asserted, could prove counterproductive in combating extremism, pushing terrorist networks into "even darker corners" of the web and making them even harder to monitor and counteract than they are at present.

He added that the internet, and companies such as Facebook, were not a cause of this hatred and violence, but merely tools that can be abused by extremists. The real solution, he felt, required addressing "the actual causes of extremism" — and debating internet controls risked distracting from that difficult, vital task.
