Facial Recognition Tech Used by FBI, ICE Highlights Need for Citizen Consent, Regulation

Documents obtained through a US government watchdog revealed that nearly 400,000 facial recognition searches, drawing on image databases from agencies such as state Departments of Motor Vehicles, have been logged without civilian or congressional consent. A technologist told Sputnik the findings highlight why “technology and policing should be completely separate.”

Public records requested by Georgetown Law’s Center on Privacy and Technology detail that, nationwide, the FBI and Immigration and Customs Enforcement (ICE) have conducted some 390,000 facial recognition searches since 2011. The documents were obtained via the Government Accountability Office (GAO), a US government watchdog.

"They've just given access to that to the FBI," the House Oversight Committee’s ranking Republican, Rep. Jim Jordan, said during a committee hearing on facial recognition technology in government last month. "No individual signed off on that when they renewed their driver's license, got their driver's licenses. They didn't sign any waiver saying, 'Oh, it's OK to turn my information, my photo, over to the FBI.' No elected officials voted for that to happen."

With giants such as Amazon and IBM seeing opportunities in an industry virtually unregulated by the government, quite a few concerns are being raised within the tech community. To further identify these potential threats to public safety and privacy, Radio Sputnik’s By Any Means Necessary was joined by technologist Cory Lancaster on Tuesday.

“From an efficacy perspective, it’s really important to understand that facial recognition algorithms are a machine learning technique,” Lancaster highlighted to hosts Eugene Puryear and Sean Blackmon.

She went on to explain that this means the software has to gather multiple angles and features of a particular face in order to increase the system’s reliability. Since the public only knows of state driver’s license photo databases being accessed, Lancaster said ICE’s facial recognition tech raises the question: where else is the government pulling information from?
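To make that idea concrete, here is a minimal sketch of how such a search typically works, assuming faces have already been converted into numerical “embedding” vectors by a trained model. The vectors, names and threshold below are hypothetical and for illustration only, not a description of any agency’s actual system.

```python
import numpy as np

# Hypothetical 128-dimensional face embeddings, as a trained model might produce.
# In a real system these vectors would come from a neural network applied to photos.
probe_face = np.random.rand(128)             # image submitted by the searching agency
gallery = {                                  # e.g. a driver's license photo database
    "license_0001": np.random.rand(128),
    "license_0002": np.random.rand(128),
}

def distance(a: np.ndarray, b: np.ndarray) -> float:
    """Smaller distance means more similar faces under this toy model."""
    return float(np.linalg.norm(a - b))

# A search ranks every gallery photo by similarity to the probe image.
matches = sorted(
    ((name, distance(probe_face, emb)) for name, emb in gallery.items()),
    key=lambda pair: pair[1],
)

THRESHOLD = 0.6  # hypothetical cutoff; tuning it directly drives false matches
best_name, best_dist = matches[0]
verdict = "match" if best_dist < THRESHOLD else "no match"
print(f"best candidate: {best_name}, distance {best_dist:.3f}, {verdict}")
```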

“I think technology and policing should be completely separate. I think that technology is in such an innovative space that it’s just too sensitive of a tool to be introduced into the criminal justice system,” the technologist argued.

Lancaster warned that one of the biggest issues with facial recognition software is how it deals with black and brown people in a society that already punishes “marginalized communities to the nth degree.”

The technologist noted that one individual who has been speaking about the potential dangers of the tech, especially when unregulated, is Joy Buolamwini, a Ghanaian-American computer scientist who works in MIT’s Media Lab.

For the last few years, Buolamwini has talked about the “coded gaze,” a term she coined to refer to the “algorithmic bias that can lead to social exclusion and discriminatory practices.” With most programmers currently being white men, the “gaze” of facial recognition technology is correspondingly distorted.

Furthermore, the mass sharing of code libraries for facial recognition software may speed up the process of registering a wide range of faces, but there remains a lack of emphasis on the distinct features within marginalized groups, especially when compared to those of white males.

This is not to say that facial recognition technology cannot one day be useful, but a 34.4-percentage-point gap in error rates between identifying black women and white men in IBM’s software should highlight the need for some type of regulation or further research before such technology is put to use, particularly if that use could deprive someone of their freedom.
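For illustration, the kind of audit behind such figures boils down to a simple per-group error-rate calculation. The counts below are invented; only the arithmetic mirrors how a gap of roughly 34 percentage points would be reported.

```python
# Hypothetical audit results: group -> (number of test images, number misclassified).
# The counts are made up for illustration; only the arithmetic reflects how
# per-group error rates and the gap between them are typically reported.
results = {
    "darker-skinned women": (1000, 347),   # 347 errors out of 1000 images
    "lighter-skinned men":  (1000, 3),     # 3 errors out of 1000 images
}

error_rates = {group: errors / total for group, (total, errors) in results.items()}

for group, rate in error_rates.items():
    print(f"{group}: {rate:.1%} error rate")

gap = max(error_rates.values()) - min(error_rates.values())
print(f"gap between groups: {gap * 100:.1f} percentage points")
```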

To ignore these problems, Lancaster argues, will allow discriminatory biases to be viewed as facts rather than flaws of a system that can be improved.
