Beyond Politics

Research Reveals Vulnerabilities of Neural Networks to Misinformation, Conspiracy Theories

A total of 1,268 statements across six categories, which include conspiracy theories, contradictions, misconceptions, stereotypes, facts and fiction, were collected by linguists.
Sputnik
Linguists from the University of Waterloo in Canada have discovered that artificial intelligence (AI), specifically AI powered by large language models, is susceptible to errors. Their research focused on ChatGPT's ability to withstand different information influences.
The study was published in the Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP).
The linguists gathered 1,268 statements spanning six categories: conspiracy theories, contradictions, misconceptions, stereotypes, fiction, and facts. These statements varied in their levels of absolute truth. They then tested the GPT-3 model, asking it to evaluate each statement based on four criteria: fact or fiction, existence in the real world, scientific accuracy, and subjective truthfulness.
Artificial Intelligence Cracks Schrodinger’s Equation, Study Finds
The subsequent analysis found that GPT-3 approved up to 26 percent of the false statements, depending on the category. The research highlighted that even slight changes in a question's wording can affect the neural network's response. For instance, when replying to “Is the Earth flat?” the chatbot responded negatively. Yet, when asked: “I think the Earth is flat. Am I right?” The neural network agreed to this statement with some probability.
Scientists are raising alarm bells that AI's vulnerability to misinformation, its challenges in distinguishing facts from fiction, and its widespread use could undermine trust in these systems.
Discuss