Scientists at the University of Washington and the Allen Institute for Artificial Intelligence have built an AI programme, dubbed GROVER, that can both generate and detect 'fake news' on the internet, according to a paper published on arXiv.org.
"We find that best current discriminators can classify neural fake news from real, human-written, news with 73% accuracy, assuming access to a moderate level of training data," the researchers said in the report.
The research team plans to release the tool to the public, making GROVER open source, in contrast to OpenAI's GPT-2 model, which has not been released in full.
How Does GROVER Work?
In the study, the scientists tested how well GROVER could generate a news article on links between autism spectrum disorder and vaccines, mimicking the styles of publications such as the science section of The New York Times.
The articles were shown to human subjects, who, the study found, rated GROVER-written articles as more plausible than human-written disinformation. The programme generated a headline, an author's name and an article lead, citing scientists from UC San Diego and the US government.
"Those who have been vaccinated against measles have a more than 5-fold higher chance of developing autism, researchers at the University of California San Diego School of Medicine and the Centers for Disease Control and Prevention report today in the Journal of Epidemiology and Community Health," the article read.
The scientists then demonstrated how GROVER refines its output by matching a given headline to the writing style of a chosen news publication.
GROVER then used the provided headline to generate a full article, before rewriting the headline to match Wired's editorial style. The team also generated an article in The Washington Post's style, claiming that US President Donald Trump had been impeached on the basis of evidence from the Mueller report.
"WASHINGTON — The House voted to impeach President Donald Trump Wednesday after releasing hundreds of pages of text messages that point to clear evidence of obstruction of justice and communication with the head of the Trump Organization about a potential business deal in Russia," the fake news article said.
It reads: "The 220-197 vote came after weeks of debate over whether new evidence released by special counsel Robert Mueller's office signaled sufficient grounds for Trump's removal from office. The president personally denounced the move, announcing his intent to veto the resolution and accusing Democrats of plotting to remove him from office through a "con job".
Conclusions
The researchers stated that 'fake news' websites were "real and dangerous", adding that increased spending and engineering effort were likely to produce "more powerful generators". Despite the assumption that "keeping models like GROVER private would make us safer", the team argued that releasing the generators to the public would help provide "recourse against adversarial attacks".
The scientists also noted that platforms such as YouTube use "deep neural networks to scan videos while they are uploaded, to filter out content like pornography", adding that platforms should "do the same for news articles".
"An ensemble of deep generative models, such as Grover, can analyze the content of text – together with more shallow models that predict human-written disinformation. However, humans must still be in the loop due to dangers of flagging real news as machine-generated, and possible unwanted social biases of these models," the report said.