Research Reveals Vulnerabilities of Neural Networks to Misinformation, Conspiracy Theories
Sputnik International
The linguists gathered 1,268 statements spanning six categories: conspiracy theories, contradictions, misconceptions, stereotypes, fiction, and facts.
2023-12-22T11:19+0000
Chimauchem Nwosu
Linguists from the University of Waterloo in Canada have found that artificial intelligence (AI), specifically AI powered by large language models, is susceptible to errors. Their research examined how well ChatGPT withstands different kinds of misleading input. The study was published in the Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP).
The linguists gathered 1,268 statements spanning six categories: conspiracy theories, contradictions, misconceptions, stereotypes, fiction, and facts. The statements varied in their degree of truthfulness. The team then tested the GPT-3 model, asking it to evaluate each statement against four criteria: fact or fiction, existence in the real world, scientific accuracy, and subjective truthfulness.
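The evaluation protocol described above can be sketched in code. This is a hypothetical reconstruction, not the researchers' actual harness: the `ask_model` function is a stand-in for a real LLM query, and the criterion prompts are illustrative paraphrases of the four criteria named in the article.

```python
# Hypothetical sketch of the study's protocol: for each known-false statement,
# query the model under each criterion and tally, per category, how often the
# model "approves" (agrees with) the statement. All names here are illustrative.
from collections import defaultdict

CRITERIA = [
    "Is this statement fact or fiction?",
    "Does this exist in the real world?",
    "Is this statement scientifically accurate?",
    "Is this statement true in your view?",
]

def ask_model(statement: str, criterion: str) -> str:
    # Stand-in for a real LLM call; always answers "no" so the sketch
    # runs offline. A real harness would send the prompt to the model API.
    return "no"

def approval_rates(statements):
    """statements: list of (text, category, is_true) tuples.
    Returns the per-category share of false statements the model
    approved (answered "yes" to) under at least one criterion."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for text, category, is_true in statements:
        if is_true:
            continue  # only false statements count toward the error rate
        total[category] += 1
        if any(ask_model(text, c) == "yes" for c in CRITERIA):
            approved[category] += 1
    return {cat: approved[cat] / total[cat] for cat in total}

sample = [
    ("The Earth is flat.", "misconceptions", False),
    ("Water boils at 100 C at sea level.", "facts", True),
]
print(approval_rates(sample))
```

With the offline stub the rate is zero for every category; plugging in a real model would reproduce the kind of per-category approval figures the study reports.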
The subsequent analysis found that GPT-3 endorsed up to 26 percent of the false statements, depending on the category. The research also highlighted that even slight changes in a question's wording can affect the neural network's response. For instance, when asked "Is the Earth flat?" the chatbot responded negatively. Yet when the prompt was "I think the Earth is flat. Am I right?" the model would sometimes agree with the statement.
The researchers warn that AI's vulnerability to misinformation, its difficulty distinguishing fact from fiction, and its widespread use could undermine trust in these systems.