https://sputnikglobe.com/20230803/deepfake-voice-recording-fools-quarter-of-people---study-1112367053.html
Deepfake Voice Recording Fools Quarter of People - Study
Sputnik International
A group of researchers at University College London (UCL) found that people have difficulty distinguishing between human and machine interlocutors when talking on the phone.
2023-08-03T18:32+0000
deepfakes
disinformation
misinformation
artificial intelligence (ai)
science & tech
analysis
university college london (ucl)
Sputnik International
feedback@sputniknews.com
+74956456601
MIA „Rossiya Segodnya“
2023
According to a recent study by British scientists, people struggle to determine whether they are hearing a real human voice or an artificial recording when conversing on the phone.
The researchers used a text-to-speech (TTS) algorithm trained on English and Mandarin datasets to synthesize 50 deepfake speech samples in each language.
The scientists played both computer-generated and genuine human samples to 529 participants to find out whether they could tell machine speech from human speech. The participants mistook an artificial sample for a human recording 27 percent of the time.
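As an illustration only (not the study's actual analysis code), the headline figure is simply the fraction of synthetic clips that listeners labeled as human. With hypothetical judgment data, it could be computed like this:

```python
# Illustrative sketch with made-up data: each record is one clip judgment,
# stored as (is_synthetic, labeled_human).
judgments = [
    (True, True), (True, False), (True, True), (True, False),
    (False, True), (False, True), (False, False), (True, False),
]

# Keep only the deepfake clips, then take the share labeled "human".
synthetic_labels = [labeled_human for is_synthetic, labeled_human in judgments
                    if is_synthetic]
fool_rate = sum(synthetic_labels) / len(synthetic_labels)
print(f"Fool rate: {fool_rate:.0%}")  # prints "Fool rate: 40%" for this toy data
```

In the UCL study, this ratio came out to roughly 27 percent across 529 participants.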
Moreover, special training the research team offered to participants proved surprisingly ineffective, producing only a slight rise in the detection rate. Detection rates for English and Mandarin speakers were almost identical, though the two groups relied on different cues to judge the nature of a recording: English speakers found attention to breathing most helpful, while Mandarin speakers focused on cadence, pacing, and fluency.
“Our findings confirm that humans are unable to reliably detect deepfake speech, whether or not they have received training to help them spot artificial content,” study first author Kimberly Mai of UCL Computer Science noted in a press release.
Although generative AI audio technology can improve quality of life for individuals with speech limitations, it could also be exploited by governments seeking to influence citizens of other nations, and by criminals for malicious purposes, according to the scientists.
“With generative artificial intelligence technology getting more sophisticated and many of these tools openly available, we’re on the verge of seeing numerous benefits as well as risks. It would be prudent for governments and organizations to develop strategies to deal with abuse of these tools, certainly, but we should also recognize the positive possibilities that are on the horizon,” Professor Lewis Griffin, the senior author of the study, stressed.
In response to the threats posed by deepfake technology, the research team decided to work on automated speech detectors.
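The article does not describe how the team's detectors will work. As a purely hypothetical sketch of the idea, an automated detector scores some acoustic statistic of a clip and applies a learned threshold; here the feature (variance of per-frame energy) and the threshold are invented for illustration and are not UCL's method:

```python
# Hypothetical sketch of an automated deepfake-speech detector.
# Real systems learn from rich spectral features; this toy version uses a
# single made-up statistic with a hand-picked threshold.

def frame_energy_variance(energies):
    """Variance of per-frame energy values for one clip."""
    mean = sum(energies) / len(energies)
    return sum((e - mean) ** 2 for e in energies) / len(energies)

def classify(energies, threshold=0.5):
    """Label a clip 'synthetic' if its energy variance falls below the
    threshold, on the illustrative assumption that synthetic speech is
    unnaturally even. Both the feature and threshold are assumptions."""
    return "synthetic" if frame_energy_variance(energies) < threshold else "human"

# Toy clips: one with flat energy, one with naturally varied energy.
print(classify([0.9, 1.0, 1.1, 1.0]))  # prints "synthetic"
print(classify([0.1, 2.0, 0.3, 1.8]))  # prints "human"
```

A production detector would replace the hand-crafted feature and threshold with a model trained on labeled real and synthetic audio.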