
AI Can 'Turn Stalker' and 'Target Victims' Using Facial Recognition Tools, Experts Warn

Amid growing concerns about the recent strides made by artificial intelligence technology, software engineer and DeepAI founder Kevin Baragona joined the likes of Elon Musk and Apple co-founder Steve Wozniak in signing an open letter calling for a pause in AI development.
Artificial intelligence (AI) can use facial recognition technology for sinister purposes, such as "stalking" people, Kevin Baragona, founder of DeepAI.org, has warned.
All it would take is a photo of a person and the advanced AI-powered software that already exists, Baragona said.
"There are services online that can use a photo of you... and I can find everything... Every instance of your face on the internet, every place you've been, and use that for stalker-type purposes," said the CEO of the online platform that uses deep learning algorithms to generate artwork.
Baragona, who launched DeepAI in 2016 as an AI news portal, has witnessed his brainchild evolve into a deep learning platform, complete with AI chatbots and a text-to-image AI art generator. However, he has since been a vocal critic of the unbridled AI race, saying, "Someone should stop the AI industry."
"... If you run into someone in public, and you're able to get a photo of them, you might be able to find their name using online services. And if you pay enough, you might be able to find where they've been, where they might currently be and even predict where they'll go," Kevin Baragona continued.
According to Baragona, one of the threats that ought to be addressed is the possibility of the United States government and law enforcement using AI "in secrecy." The fears that the technology could offer access to people's online activity and real-life whereabouts echo a recent warning voiced by C.A. Goldberg, an American law firm.

"AI could enable offenders to track and monitor their victims with greater ease and precision than ever before," the New York-based company specializing in AI-related crimes wrote in a blog post on its website.

It added that advanced facial recognition technology is extremely effective at identifying individuals from images or videos. Even low-quality images, such as those gleaned from surveillance cameras or other online sources, could allow potential stalkers to track victims.
The warnings about AI-powered facial recognition software come as PimEyes, a UK-based company, faces a legal challenge. Its online face search engine is designed to scour the internet for websites containing matches to photos submitted by users. While the company insists that the "risks of abusing PimEyes’ services are reduced to possible minimum,” and that its databases are “secured in accordance with the highest standards of data security,” some are not convinced.
UK civil liberties campaign group Big Brother Watch said it poses a "great threat to privacy of millions of UK residents," according to a legal complaint.

"Images of anyone, including children, can be scoured and tracked across the internet," Madeleine Stone, a legal and policy officer at the campaigning organization was cited as saying.

In a statement issued in response, PimEyes underscored that it "had never been and is not a tool to establish the identity or details of any individual."
But this is just one of many concerns: what began as a trickle of warnings has swelled into a powerful chorus of voices arguing that AI is developing too fast and becoming "smarter" than humans.
Earlier in the year, Baragona, like Tesla and SpaceX CEO Elon Musk, added his signature to an open letter calling for a six-month pause in the development of AI systems more advanced than GPT-4. According to the document, “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”
"I wasn't worried about AI at all until a year ago, when we started seeing exponential progress... Now I think AI is pretty scary. This is what keeps me up at night. What are we building here? Why do we need this stuff? It's really fun, it's cool, people love it, but it's almost too good, it's too disruptive," Kevin Baragona said in an interview earlier this month.