By Veena Murali
Most of us cringe when contemplating the idea of someone watching us at all times. Still, we overlook being surveilled when it comes to corporations like Facebook and Google. There is a meaningful distinction, though, in what we tolerate. Knowing companies have data about our search history, previous purchases, and interests online is not nearly as uncomfortable as somebody recognizing us and watching our every move in real time. Facial recognition technology is enabling the latter and calling into question just how much of our lives can be surveilled and collected.
The use of facial recognition technology has grown tremendously as governments look to expand their safety and surveillance capabilities. The technology uses software to create a template of a person’s face that can then be matched against previously verified images (driver’s licenses, social media accounts, etc.) to ‘recognize’ an individual. As straightforward as that sounds, the technology presents contested legal issues and a persistent problem of earning citizens’ trust. According to a nationwide survey, Americans actually trust Big Tech companies, like Amazon and Google, more than the federal government. Implementing facial recognition technology has therefore proved difficult in cities across the world, from San Francisco to London.
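The template-matching step described above can be illustrated in miniature. Real systems use neural networks to turn a face image into a numerical template (an embedding vector) and then compare templates by similarity. The sketch below is a hypothetical illustration of only that comparison step, assuming toy hand-made vectors and an invented threshold; it is not any vendor's actual implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face templates, from -1 (opposite) to 1 (identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """Declare a 'recognition' when similarity clears the threshold.

    The threshold is a policy choice: lower it and false matches rise,
    raise it and genuine matches are missed -- the trade-off at the heart
    of the accuracy debates discussed in this article.
    """
    return cosine_similarity(probe, enrolled) >= threshold

# Toy 3-dimensional templates (real embeddings are hundreds of dimensions).
enrolled = np.array([0.9, 0.1, 0.3])       # template from a verified image
probe_same = np.array([0.88, 0.12, 0.31])  # new photo of the same person
probe_diff = np.array([0.1, 0.9, 0.2])     # photo of someone else
```

Here `is_match(probe_same, enrolled)` clears the threshold while `is_match(probe_diff, enrolled)` does not; in deployment, the same mechanism runs against entire watch-list databases.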
While facial recognition is and has been in use in many American cities as a way to combat crime and secure national borders, San Francisco became, in May 2019, the first major American city to ban its use. Ironically, San Francisco, a hub for all things technology, declared that “the propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits, and the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring.”
The city’s worries aren’t unwarranted. Facial recognition technology has been notorious for propelling implicit biases against race and gender. For example, researchers at MIT found that Amazon’s facial recognition software misidentified 28 black members of Congress as criminals and consistently returned worse results for darker-skinned individuals as well as women. The integration of societal biases into technology complicates the question of using facial recognition as a way to identify criminals and threats. Additionally, facial recognition technology has been weaponized by the Chinese government to racially profile Uighur Muslims: the algorithms search exclusively for Uighur Muslims and closely monitor their comings and goings. While most governments portray facial recognition technology as a cutting-edge solution for decreasing crime, algorithms developed by humans remain subject to human flaws. Implicit biases aren’t created within technology; they’re bred in society and transferred into technology, which exacerbates the consequences for racial and ethnic minorities.
London, on the other hand, announced in January 2020 that its facial recognition technology has passed the trial stage and will be integrated into everyday policing. Cameras will be placed in shopper- and tourist-heavy areas and will be programmed to identify individuals on watch lists who are wanted for serious and violent offenses. Still, the technology has a track record of being incredibly accurate, a fact that British citizens and privacy groups are hesitant to acknowledge. The Metropolitan Police have vowed to remain extremely transparent in using the technology: officers will hold signs and pass out leaflets when cameras are in use to ease uncertainty, at least until facial recognition technology becomes the new ‘normal’ in London.
Ultimately, the question of using facial recognition technology is characterized by various complexities. While it can be helpful in national security and identifying criminals, it remains vulnerable to misuse by law enforcement and to implicit biases against racial and gender groups. Moreover, the sense of invasion it creates in ordinary people is too Big Brother-esque for most to consider normal. Still, many things we take for granted today were widely contested at their time of development: personal data collection, TouchID, and the like. Is it possible that facial recognition will also become the new normal?