Why Is the Complete AI ‘Text Generator’, Previously Deemed ‘Too Dangerous’, Being Released Now?

In February, researchers from the artificial intelligence group OpenAI said that their GPT-2 AI model could be dangerous because of its ability to create misleading content. As a result, they released only a limited version of the tool. Since then, the organisation has made the complete version fully available to the public.

An AI model known as GPT-2 was presented in February 2019 by the research organisation OpenAI, co-founded by Elon Musk among others. Trained on text from many web sources, the model generates convincing text samples by predicting which words will come next, even when given only a small portion of initial text.
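The next-word-prediction idea described above can be illustrated with a deliberately tiny sketch. This is not OpenAI's code or architecture (GPT-2 is a large neural network); it is a hypothetical bigram model that counts which word follows which in a small corpus, then extends a prompt by greedily appending the most frequent successor of the last word:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "multiple web sources" (illustrative only).
corpus = (
    "the model predicts the next word . "
    "the model generates text . "
    "the text sounds convincing ."
).split()

# Count which word follows each word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt, max_words=5):
    """Greedily append the most frequent successor of the last word."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = following.get(words[-1])
        if not candidates:  # no known successor: stop generating
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the model"))
```

A real language model like GPT-2 replaces the bigram table with a neural network conditioned on the whole preceding context, and samples from a probability distribution rather than always taking the single most frequent continuation, which is what makes its output far more coherent than this sketch.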

Following the February announcement of GPT-2, researchers warned about possible malicious applications of the programme, including the generation of misleading news articles, the creation of abusive fake content on social media, and even extremist propaganda on the web. What has changed now?

Since February, the organisation has released progressively larger versions of the model, and even the full version of GPT-2 can now be accessed by the public. Several developers have already built tools on top of it that let anyone generate text, including Adam King, whose web interface “Talk to Transformer” uses the full GPT-2 version released by OpenAI.

In its announcement about the full version this week, OpenAI noted that it had found “no strong evidence” of the model being “misused” to produce a high volume of coherent spam, but acknowledged that the system could still be used maliciously, for instance to generate “synthetic propaganda” for a terrorist organisation. OpenAI also offered its own tools to help detect GPT-2-generated text with roughly 95% accuracy, though these would still need to be accompanied by human judgement. The organisation added that the full release could prompt further discussion among experts and the general public about the possible misuse of text-generating tools.

“We are releasing this model to aid the study of research into the detection of synthetic text, although this does let adversaries with access better evade detection”, OpenAI stated.

Nevertheless, the researchers said that “synthetic text generators have a higher chance of being misused if their outputs become more reliable and coherent” over time. OpenAI also acknowledged that it could not anticipate all the potential threats of releasing the full version.
