Facial Recognition Expansion: Exploring Implications for Ethnic Minority Communities
Created Feb 17, 2023 - Last updated: Feb 17, 2023
This post was originally published on Medium.
The Metropolitan Police is expanding its facial recognition capabilities, and this is bound to have enormous societal and cultural implications. There’s a cloud of concern looming over potential violations of civil liberties, especially for marginalized communities. To figure out where we stand on this tech and how to navigate its use responsibly, we need to open up a dialogue that’s honest, transparent, and grounded in evidence.
A diverse, inclusive and intersectional approach to AI should prioritize public trust and acknowledge those most susceptible to algorithmic bias. Understanding how different social groups perceive emerging technologies, particularly those with reasons to distrust current social control systems, is more crucial than ever.
My recent research project, which examines perceptions of algorithmic fairness and trust among various social groups and individuals with intersecting identities, serves as a starting point for this conversation.

What is Automated Facial Recognition?
Automated facial recognition is an AI-driven data-analysis technology that performs real-time biometric analysis of a person's features to create a unique 'map', which can be cross-referenced with previously stored data to identify persons of interest. AI technologies are often considered the next major advancement in crime-fighting tech; however, their use is controversial.

Why the controversy?
Picture this: facial recognition cameras lurking in public spaces, silently collecting your biometric data. It’s a red flag for privacy violations and potential human rights breaches, not to mention the nagging issue of inaccuracies and biases within the technology that could exacerbate existing societal disparities.
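One way to see where inaccuracies creep in is the matching step itself. The sketch below is a hypothetical, simplified illustration (real systems derive face "maps" as embeddings from deep neural networks and tune thresholds operationally; all names, vectors, and the 0.8 threshold here are invented): a probe map is compared against stored templates, and the similarity threshold directly trades missed matches against false alerts — a knob whose error rates are known to fall unevenly across demographic groups.

```python
import numpy as np

# Hypothetical sketch of the matching step in automated facial recognition.
# A face "map" is modeled here as a plain feature vector; real systems
# derive such embeddings with deep neural networks.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe, watchlist, threshold=0.8):
    """Return (person_id, score) for the best match above threshold, else None.

    The threshold is the critical knob: lowering it catches more true
    matches but also produces more false alerts.
    """
    best_id, best_score = None, -1.0
    for person_id, template in watchlist.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= threshold else None

# Toy, made-up 4-dimensional "embeddings" for illustration only.
watchlist = {
    "person_A": np.array([0.9, 0.1, 0.3, 0.5]),
    "person_B": np.array([0.1, 0.8, 0.7, 0.2]),
}
probe = np.array([0.88, 0.12, 0.28, 0.52])  # resembles person_A

print(match_against_watchlist(probe, watchlist))
```

The structural point is that "identification" is never a yes/no fact but a score compared against a chosen cutoff; who gets falsely flagged depends on where that cutoff sits and how well the underlying embeddings represent different faces.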
These issues are indicative of what Ruha Benjamin coined "the New Jim Code": the "employment of new technologies that reflect and reproduce existing inequalities, but that are promoted and perceived as more objective and progressive than the discriminatory systems of a previous era".
Interestingly, major players in tech like IBM, Microsoft, and Amazon have opted out of supplying facial recognition technology to law enforcement, citing ethical concerns. But while some step back, the Metropolitan Police is moving ahead with deployment, sparking calls for tighter regulations to govern its use.

Why Should We Pay Attention?
Policing and surveillance play pivotal roles in societal control and can reinforce existing power differentials and inequitable justice outcomes. Introducing AI into these domains risks perpetuating discriminatory surveillance practices, particularly against historically marginalized groups already on the short end of the stick. While discussions on AI in policing are prevalent among tech experts and policymakers, the voices of the public often get drowned out. It's important to bring everyone to the table, from tech gurus to community leaders, so that emerging technologies are wielded ethically and effectively.

Peeking into People's Minds
My recent study took a deep dive into how the public feels about facial recognition, unearthing a treasure trove of insights and concerns across different communities. Through open-ended surveys and careful analysis, we uncovered five key themes: distrust of law enforcement, worries about effectiveness, fears of racial bias, socioeconomic disparities, and privacy/data protection apprehensions.
It's clear that not everyone sees AI decision-making through the same lens, especially those who've been marginalized by human biases in the past. That's why we need research that purposely recruits and listens to diverse social groups, capturing the dimensions that account for individual differences in experiences with AI.

In a Nutshell
While defenders tout the potential benefits of AI-enabled policing, it is equally important to acknowledge the potential for exploitation and abuse. This article advocates a critical examination of these technologies and a comprehensive dialogue to determine their ethical boundaries in society. It is more important than ever to challenge tech solutionism, interrogate idealistic views of AI, and aim to build a society that values not just efficiency, but justice and fairness for all.