‘Risky Technology’: US Police Use of New Facial Recognition App Prompts Privacy Concerns

US law enforcement agencies have started using a new and controversial facial recognition system created by startup Clearview AI, according to an investigative report by the New York Times.
Sputnik

Law enforcement officials have been using the app by uploading photos of people of interest. Once a photo is uploaded, the app searches millions of websites, including Facebook, YouTube, Twitter, Instagram and Venmo, for public photos of that person, and can provide links to the pages where the retrieved images are hosted.

“The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew,” the Times reported Saturday.

The Times also reported that the app from Clearview AI - which was founded by Hoan Ton-That, a 31-year-old entrepreneur - has already been used by police to solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation crimes. In one example cited by the Times, Indiana State Police were able to solve a case using the app in 20 minutes.

However, there are concerns over the security and oversight of Clearview AI’s servers. Even though the company states that it does not look at photos uploaded to its servers, it appeared to be aware of, and monitoring, searches for Kashmir Hill, the Times journalist investigating the company.

“While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for,” Hill wrote in the Times piece.

In addition, Hill noted that when she began investigating the company back in November, only one employee was listed on the networking site LinkedIn: a sales manager named “John Good.” Hill later learned that “John Good” was in fact an alias used by Ton-That, and that for a month the company did not return her emails or phone calls.

Hill also later learned that the company was co-founded by Richard Schwartz, an aide to former New York Mayor Rudolph Giuliani, and received financial backing from Peter Thiel, a venture capitalist who has invested in Facebook and co-founded Palantir Technologies. Another investment service firm called Kirenaga Partners also appears to have invested in Clearview AI.

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Kirenaga Partners founder David Scalzo is quoted as saying by the Times. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

According to the company, its app successfully matches a photo to the right person up to 75% of the time. However, Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, noted that there is no independent data verifying the accuracy of Clearview’s algorithm.

“We have no data to suggest this tool is accurate. The larger the database, the larger the risk of misidentification because of the doppelgänger effect. They’re talking about a massive database of random people they’ve found on the internet,” Garvie told the Times.
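Garvie’s point about database size can be made concrete with a back-of-the-envelope calculation. The sketch below uses purely hypothetical numbers (not Clearview’s actual figures): if a matcher has some small fixed false-match rate per comparison, the chance that a probe photo falsely matches *someone* grows rapidly with the number of faces enrolled in the gallery.

```python
# Illustrative only: how misidentification risk scales with gallery size.
# Assumes a hypothetical fixed per-comparison false-match rate (FMR) and
# independent comparisons; real systems are more complicated.

def prob_any_false_match(fmr: float, n: int) -> float:
    """Probability of at least one false match across n independent comparisons."""
    return 1.0 - (1.0 - fmr) ** n

# Hypothetical FMR of 1 in a million per comparison:
fmr = 1e-6
for n in (1_000, 1_000_000, 3_000_000_000):  # precinct-scale vs. web-scale
    print(f"N={n:>13,}: P(at least one false match) = {prob_any_false_match(fmr, n):.4f}")
```

Under these assumed numbers, a thousand-face gallery yields a false match about 0.1% of the time, while a gallery of billions of web-scraped faces makes a false match a near-certainty for every search, which is the “doppelgänger effect” Garvie describes.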

According to the Times, Clearview continues to amass a large database of people who have caught the attention of law enforcement officials. The company also appears to have the ability to alter what results users see. When the company noticed that Hill was asking officers to run her photo through the app, Hill’s face was “flagged by Clearview’s systems and for a while showed no matches.”

“It’s creepy what they’re doing, but there will be many more of these companies. There is no monopoly on math,” Al Gidari, a privacy professor at Stanford Law School, told the Times. “Absent a very strong federal privacy law, we’re all screwed.”

Since Clearview AI only uses publicly available images, changing your privacy settings on social media apps can prevent the app from adding your photos to its database, Ton-That told the Times. However, images that have already been uploaded to the app remain on the company’s servers even if users later delete them from the sites where they were posted. Ton-That said the company was developing a tool that would allow users to request the removal of their images from Clearview AI’s servers after deleting them from the sites where they were initially hosted.

According to Woodrow Hartzog, a professor of law and computer science at Northeastern University in Boston, facial recognition should be banned in the US. 

“We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” Hartzog told the Times. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”
