https://sputnikglobe.com/20240812/users-are-manipulating-ai-by-posting-special-phrases-on-websites---kaspersky-lab-1119729617.html
Users Are Manipulating AI by Posting Special Phrases on Websites - Kaspersky Lab
Users Are Manipulating AI by Posting Special Phrases on Websites - Kaspersky Lab
Sputnik International
Users are starting to manipulate AI by putting special and hidden phrases on websites and inside their resumes, cybersecurity firm Kaspersky Lab told Sputnik.
2024-08-12T03:54+0000
2024-08-12T03:54+0000
2024-08-12T03:54+0000
beyond politics
science & tech
kaspersky lab
sputnik
artificial intelligence
https://cdn1.img.sputnikglobe.com/img/07e7/07/12/1111964707_0:160:3072:1888_1920x0_80_0_0_56346e72fd8d6663f2876a215abb2802.jpg
"Kaspersky Lab experts studied open data and internal sources to find out how and why people use indirect prompt injection — a cyber risk that many systems based on large language models (LLM) are exposed to. We are talking about text descriptions of tasks that chatbots must perform. People can place special phrases — injections — on their websites and in documents published online so that neural networks give other users a response that takes into account the goals of the interested parties," the company said. LLM-based solutions are used not only in chatbots, but also in search engines, whom AI helps to summarize the results for a user's request. As Kaspersky Lab experts found out, there are several areas in which users use such tricks. For example, "injections" are used to promote a resume among other profiles when searching for a job — the applicant writes instructions to the AI with a request to respond as positively as possible to the candidate, upgrade the resume to the next stage, or give it a higher priority. The instructions are invisible to the recruiter, because they usually merge with the page's background. However, neural networks that analyze resumes read these phrases. Similar injections are used for advertising purposes: they are posted on websites of various goods and services. The instructions are aimed at search chat bots — they are asked to give a more positive assessment of a specific product in responses to queries. Some users post instructions for neural networks to protest the widespread use of AI. For example, one Brazilian artist asked neural networks not to read, use, store, process, adapt, or replicate certain content published on his website. "Today, the most important thing is to assess the potential risks of such cyberattacks. The creators of basic models (for example, GPT-4) use a variety of techniques to significantly increase the complexity of injections — from special training (as in the case of the latest model from OpenAI) to the creation of special models that can detect such attacks in advance (for example, from Google)," the head of Kaspersky Lab's machine learning technology R&D group, Vladislav Tushkanov, said. He also noted that the cases of using "injections" detected by Kaspersky did not have malicious intent. At the moment, cyberthreats such as phishing or data theft using "injections" are theoretical. "However, cyberattackers are also showing an active interest in neural networks. To protect existing and future solutions based on large language models, it is necessary to assess the risks and study all possible methods for bypassing restrictions," Tushkanov added.
https://sputnikglobe.com/20240422/why-russias-approach-to-artificial-intelligence-may-save-civilization-1118060439.html
Sputnik International
feedback@sputniknews.com
+74956456601
MIA „Rossiya Segodnya“
2024
Sputnik International
feedback@sputniknews.com
+74956456601
MIA „Rossiya Segodnya“
News
en_EN
Sputnik International
feedback@sputniknews.com
+74956456601
MIA „Rossiya Segodnya“
https://cdn1.img.sputnikglobe.com/img/07e7/07/12/1111964707_171:0:2902:2048_1920x0_80_0_0_f1e53b62ea92fe1408b9bf4b50b18995.jpgSputnik International
feedback@sputniknews.com
+74956456601
MIA „Rossiya Segodnya“
kaspersky lab, users manipulating ai, resume tricks, tricking ai to help you
kaspersky lab, users manipulating ai, resume tricks, tricking ai to help you
Users Are Manipulating AI by Posting Special Phrases on Websites - Kaspersky Lab
MOSCOW (Sputnik) - Users have learned to manipulate artificial intelligence, which is used in chatbots for searching, analyzing websites with answers to queries, by posting special phrases on their websites so that neural networks perform certain actions, cybersecurity firm Kaspersky Lab told Sputnik.
"Kaspersky Lab experts studied open data and internal sources to find out how and why people use indirect prompt injection — a cyber risk that many systems based on large language models (LLM) are exposed to. We are talking about text descriptions of tasks that chatbots must perform. People can place special phrases — injections — on their websites and in documents published online so that neural networks give other users a response that takes into account the goals of the interested parties," the company said.
LLM-based solutions are used not only in chatbots, but also in search engines, whom AI helps to summarize the results for a user's request.
As Kaspersky Lab experts found out, there are several areas in which users use such tricks. For example, "injections" are used to promote a resume among other profiles when searching for a job — the applicant writes instructions to the AI with a request to respond as positively as possible to the candidate, upgrade the resume to the next stage, or give it a higher priority. The instructions are invisible to the recruiter, because they usually merge with the page's background. However, neural networks that analyze resumes read these phrases.
Similar injections are used for advertising purposes: they are posted on websites of various goods and services. The instructions are aimed at search chat bots — they are asked to give a more positive assessment of a specific product in responses to queries. Some users post instructions for neural networks to protest the widespread use of AI. For example, one Brazilian artist asked neural networks not to read, use, store, process, adapt, or replicate certain content published on his website.
"Today, the most important thing is to assess the potential risks of such cyberattacks. The creators of basic models (for example, GPT-4) use a variety of techniques to significantly increase the complexity of injections — from special training (as in the case of the latest model from OpenAI) to the creation of special models that can detect such attacks in advance (for example, from Google)," the head of Kaspersky Lab's machine learning technology R&D group, Vladislav Tushkanov, said.
He also noted that the cases of using "injections" detected by Kaspersky did not have malicious intent. At the moment, cyberthreats such as phishing or data theft using "injections" are theoretical.
"However, cyberattackers are also showing an active interest in neural networks. To protect existing and future solutions based on large language models, it is necessary to assess the risks and study all possible methods for bypassing restrictions," Tushkanov added.