When Identity Meets AI: The Intersection of Facial Recognition and Gender Diversity
Kaitlyn Seever

Introduction
Facial recognition technology has evolved dramatically since its inception in the 1960s into a pervasive tool used in security and personal identification today. Initially developed to automate facial identification through rudimentary algorithms, the technology has become a staple in commercial, governmental, and personal contexts. However, alongside the advancement and normalization of facial recognition technologies has come evidence of biases in these algorithms’ foundations, particularly concerning gender identity. Research has shown that these technologies frequently misidentify transgender and gender-diverse individuals, with significant implications for their security. This investigation delves into the history of facial recognition technology, its biases, and the detrimental effects these ingrained biases have on gender-diverse individuals. Ultimately, it demonstrates that facial recognition technologies carry significant security risks for transgender and gender-diverse individuals due to biases innate to the technologies’ algorithms.
History of Facial Recognition Technology
Early attempts at facial recognition technology aimed to grant machines a key human ability: the capacity to recognize faces.[1] Efforts to realize this goal can be traced back to the pioneering work of Woody Bledsoe, Helen Chan Wolf, and Charles Bisson in the 1960s. Between 1964 and 1965, these researchers began using computers to recognize the human face. To do so, they manually marked facial “landmarks” such as the centers of the eyes and the mouth, though they were constrained by the technology available at the time. The technology remained primitive during the following decade, though notable improvements did occur: during the 1970s, facial recognition became more accurate. Extending Bledsoe’s work, researchers A.J. Goldstein, L.D. Harmon, and A.B. Lesk used 21 specific subjective markers, such as hair color and lip thickness, to improve the automation of facial recognition. Despite these improvements, the process still required manual computation of measurements and locations.[2]
Throughout the 1980s and 1990s, linear algebra played a significant role in advancing facial recognition technology. This shift began in 1988, when L. Sirovich and M. Kirby showed that a collection of facial images could be distilled into a small set of basic features, an approach that came to be known as the eigenface method. A breakthrough in automatic facial recognition occurred in 1991, when Matthew A. Turk and Alex P. Pentland built on Sirovich and Kirby’s work and developed a method to detect faces within images, marking a significant advancement in the field.[3]
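To make the linear-algebra idea behind this shift concrete, the sketch below applies principal component analysis to a set of synthetic image vectors using NumPy. It is an illustrative reconstruction of the eigenface technique described above, not the original researchers’ code, and the random “images” stand in for a real photograph collection.

```python
# A minimal sketch of the eigenface idea using NumPy and synthetic data.
# Real systems used databases of photographs; the random "images" here are
# placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_images, height, width = 50, 32, 32
faces = rng.random((n_images, height * width))   # each row: one flattened image

# 1. Center the data around the mean face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# 2. Principal component analysis via singular value decomposition.
#    Rows of vt are the "eigenfaces": basis images that capture the
#    directions of greatest variation in the collection.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]                              # keep the top 10 components

# 3. Any face can now be described by a handful of weights instead of
#    1,024 raw pixel values.
weights = centered @ eigenfaces.T
print(weights.shape)                              # (50, 10)
```

The key point is that each face is reduced to a handful of weights over a shared set of basis images, which is what later made automatic detection and matching tractable.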
The technology continued to evolve from the 1990s into the 2000s. In the early 1990s, the Defense Advanced Research Projects Agency (DARPA) and the National Institute of Standards and Technology (NIST) rolled out the Face Recognition Technology (FERET) program to encourage the commercial facial recognition market. Moreover, NIST introduced Face Recognition Vendor Tests (FRVT). These tests were created to provide independent government evaluations of both commercially available facial recognition systems and prototype technologies. The evaluations were designed to provide law enforcement agencies and the U.S. government with the information necessary to make informed decisions about the most effective ways to deploy facial recognition technology.[4]
In 2006, NIST launched the Face Recognition Grand Challenge (FRGC). This program aimed to promote and advance face recognition technology in support of existing face recognition efforts in the U.S. government. FRGC included an evaluation of the latest facial recognition algorithms using high-resolution facial images, 3D face scans, and iris images. The results revealed that the new algorithms were ten times more accurate than the face recognition algorithms of 2002 and 100 times more accurate than those of 1995.[5]
From 2010 to today, facial recognition technology has increasingly permeated commercial, personal, and military domains. For example, in 2010, Facebook implemented facial recognition technology on its platform. The company intended to use this technology to automatically identify people in photos that users uploaded, so that Facebook could suggest who might be in a photo without users having to tag them manually.[6] Furthermore, the U.S. military used facial recognition technology to confirm the identity of Osama bin Laden’s body following the U.S. assault that killed him in 2011.[7] By 2015, this technology was being introduced into personal devices as a security feature with Windows Hello and Android’s Trusted Face.[8] In the years since, facial recognition technology has become a marketing tool for corporations, as seen with Apple’s 2017 launch of the iPhone X, which was marketed as the first iPhone that could be unlocked with Face ID, Apple’s branded term for its facial recognition system.[9]
Bias in AI Systems
To understand the risks posed by artificial intelligence in facial recognition technologies, it is essential to first recognize how bias is introduced into AI systems. IBM defines AI bias, also known as machine learning bias, as “AI systems that produce biased results that reflect and perpetuate human biases within a society, including historical and current social inequality.”[10] There are three major sources of bias in these algorithms: training data bias, algorithmic bias, and cognitive bias.[11]
Training data forms the foundation for AI decision-making processes. If the training data contains over- or underrepresented groups, it can lead to prejudiced judgments. Flawed training data can, in turn, result in algorithmic bias, where the AI consistently produces errors, promotes unfair outcomes, or reinforces the bias present within this distorted data. The third major source of bias, cognitive bias, occurs when developers embed their own experiences, preferences, or assumptions into the system—through the selection of data or how that data is weighted.[12]
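A small, self-contained sketch can show how the first two sources of bias compound. The example below is a toy scenario written for this discussion; it assumes Python with scikit-learn and uses invented data, not any real facial recognition dataset. A model is trained on data in which one group is heavily underrepresented, and its error rate for that group is far worse than for the majority.

```python
# A toy illustration (not from the cited sources) of how an underrepresented
# group in training data can receive systematically worse predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Majority group A: 950 examples; minority group B: 50 examples.
x_a = rng.normal(size=(950, 1))
y_a = (x_a[:, 0] > 0).astype(int)      # in group A, positive x means label 1
x_b = rng.normal(size=(50, 1))
y_b = (x_b[:, 0] < 0).astype(int)      # in group B, the relationship is reversed

x = np.vstack([x_a, x_b])
y = np.concatenate([y_a, y_b])
group = np.array(["A"] * 950 + ["B"] * 50)

model = LogisticRegression().fit(x, y)  # group membership is not a feature
pred = model.predict(x)

for g in ("A", "B"):
    acc = (pred[group == g] == y[group == g]).mean()
    print(f"accuracy for group {g}: {acc:.2f}")
# The model learns the majority pattern, so group B's accuracy collapses.
```

Because the model never sees enough of group B to learn its pattern, it simply applies the majority group’s rule, a statistical analogue of the “garbage in, garbage out” dynamic Linis-Dinco describes in the next paragraph.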
Jean Linis-Dinco compellingly states the danger posed by inadequate or biased training data: “AI is garbage in, garbage out, and if we feed it with training data that devalues trans people, we multiply that bigotry and turn it into an institution.”[13] This statement encapsulates the broader concern that facial recognition systems reflect and perpetuate existing societal biases. When these systems are not developed with inclusivity in mind, they have the potential to marginalize already vulnerable populations further. Consequently, these biases are not simply technical shortcomings; they are representative of a larger systemic failure to account for the diversity of human experiences, disproportionately harming gender-diverse individuals and reinforcing existing societal inequities.
Transgender Identities and Misidentification
It is well documented that human bias has infiltrated AI systems, resulting in practical consequences for women and people of color. For example, as mentioned in Xristina Zogopoulou’s paper, a recruiting algorithm used by Amazon was abandoned after it was determined that the algorithm favored men’s resumes over women’s. Specifically, it gave preference to applicants based on words like “executed” or “captured,” which were more commonly found on men’s resumes.[14] Similarly, an algorithm used in courtroom sentencing was shown to be less lenient toward Black individuals than toward white individuals, resulting in harsher sentences for Black defendants.[15]
Biases found in general artificial intelligence technologies extend to facial recognition technologies because of how the technology is trained. However, facial recognition technology faces the unique issue of misidentifying gender-nonconforming individuals. Research from the University of Colorado Boulder reveals that the most popular facial recognition technologies on the market fail to accurately recognize the gender of transgender and gender-nonconforming individuals.[16] The project tested facial recognition systems from tech giants IBM, Amazon, Microsoft, and Clarifai on photographs of cisgender, transgender, and gender-nonconforming individuals. On average, the researchers found that these systems correctly identified cisgender men about 97 percent of the time and cisgender women about 98 percent of the time.[17] By contrast, transgender men were misidentified as women 38 percent of the time, and individuals who did not identify as male or female—such as those who are non-binary, agender, or genderqueer—were misidentified 100 percent of the time.
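The disparities reported above come from computing accuracy separately for each group rather than overall. The sketch below shows, with invented placeholder records rather than the study’s actual data, how such a disaggregated audit can be tabulated and why a strictly binary classifier can never be correct for nonbinary people.

```python
# A sketch of how a per-group audit can be tabulated; the records below are
# invented placeholders for illustration, not the CU Boulder study's data.
from collections import defaultdict

records = [
    # (self-identified gender, label returned by the service)
    ("cis man", "man"), ("cis woman", "woman"),
    ("trans man", "woman"), ("trans man", "man"),
    ("nonbinary", "woman"), ("nonbinary", "man"),
]

# A binary service can only ever be "correct" for people whose identity maps
# onto one of its two labels, which is why nonbinary error rates reach 100%.
expected = {"cis man": "man", "cis woman": "woman",
            "trans man": "man", "trans woman": "woman"}

totals, correct = defaultdict(int), defaultdict(int)
for identity, predicted in records:
    totals[identity] += 1
    if expected.get(identity) == predicted:   # nonbinary has no valid mapping
        correct[identity] += 1

for identity in totals:
    rate = correct[identity] / totals[identity]
    print(f"{identity}: {rate:.0%} labeled in line with their identity")
```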
More research on the limitations of facial recognition was conducted by Os Keyes, a PhD student in the University of Washington’s Department of Human Centered Design & Engineering. They explored the ubiquity of automatic gender recognition (AGR) by delving into the past 30 years of facial recognition research. After studying 58 separate research papers and examining how the authors handled gender, they determined that these researchers followed a binary model of gender more than 90 percent of the time. Additionally, the papers treated gender as immutable 70 percent of the time, and research focused specifically on gender treated it as a purely physiological construct more than 80 percent of the time.[18] These results demonstrate that even the most advanced facial recognition technologies fail to interpret gender beyond the binary, underscoring a critical limitation in their design. The technology has not evolved at the same rate as the current understanding of gender as a construct.[19] As such, there is a massive gap in the literature surrounding facial recognition technology and gender-diverse individuals, resulting in the erasure of gender minorities in these technologies.
Industry Responses and Ongoing Gaps in Addressing Bias in Facial Recognition
The tech industry is aware of these structural biases, and some companies are taking steps to address them. IBM has implemented “debiasing toolkits” that scan for biases in AI systems, including by examining the data on which these systems are trained, to make algorithms fairer.[20] Other companies simply warn users about the limitations of their technologies. Amazon’s Rekognition product includes detailed use guidelines stating that its predictions about gender are limited to the binary and based on a face’s physical appearance; the guidelines specifically note that the product should not be used to determine an individual’s gender identity.[21] Similarly, Clarifai stated that it chose to use “masculine” and “feminine” as its descriptive terms because gender is “an aspect of self and not something we felt our AI could appropriately label.”[22]
Despite these justifications and qualifiers, acknowledgment of the risks these technologies pose to trans and gender-nonconforming individuals remains missing. As Jean Linis-Dinco notes in her piece, “Machines, Artificial Intelligence and Rising Global Transphobia,” the long-term security ramifications must not be overlooked:
There is no argument that misgendering is disrespectful, but above all, it perpetuates the system of oppression that relegates those who do not fit into gender binaries to a subclass of human existence. And for members of the trans community who have been historically maligned and marginalised, pushing them back into the shadows invalidates their personhood. Lack of disaggregated data means that inequalities faced by transgender individuals will remain indiscernible, and this could have a tremendous effect on decision-making processes that aid in the realization, protection and fulfillment of their human rights.[23]
Airport Security and Facial Recognition for Gender-Diverse Individuals
The tendency of facial recognition technology to misidentify gender-diverse individuals does not exist in a vacuum; it has real-world security implications. Misidentification can be actively harmful to transgender and gender-nonconforming individuals. Because updating the gender marker on legal documents is complicated and expensive, not every transgender and gender-nonconforming person changes their documentation. In fact, according to a 2015 study, only 11 percent of respondents had successfully updated all of their identity documents with their preferred name and gender.[24] This can cause trouble at the airport, for example, where transgender individuals can be subjected to invasive body searches if their identification does not match their gender identity.[25]
Notably, between 2016 and 2019, transgender and nonbinary travelers filed 5 percent of all complaints about mistreatment at the hands of TSA agents, despite making up an estimated 1 percent of the population.[26] In 2023, the TSA announced plans to implement an artificial intelligence-driven, “gender-neutral” screening system, claiming that it would make travel easier for gender-diverse individuals. However, this shift was accompanied by the introduction of another controversial technology at airports: biometric facial matching used to scan passengers at more than 200 U.S. airports, including all airports with international departures.[27]
Thus, inaccurate facial recognition technology poses serious risks for gender-diverse individuals. Not only can it place them in a position of insecurity in travel-related situations, but mass deployment of this technology can also increase their chances of being forcibly outed, violating their privacy and compromising their safety.[28]
Uber’s Facial Recognition and Transgender Drivers
The ramifications of facial recognition technology extend beyond physical security concerns, such as airport security screenings, to professional and financial security. When access to employment depends on the technology, facial recognition can determine an individual’s ability to earn a living.
In 2016, Uber rolled out its Real-Time ID Check security feature to “protect both riders and drivers.” This feature would occasionally prompt drivers to pull over and take a selfie, which was then compared to the driver’s photo on file using technology from Microsoft Cognitive Services. If the platform determined the photos were not a match, the driver’s account would be temporarily suspended while Uber looked into the situation. The check was meant to ensure that drivers were being honest about their identity for the safety of riders. However, the facial recognition system used in this verification process misidentified drivers who were undergoing gender transition. Consequently, some transgender drivers faced account suspensions. To resolve these issues and regain access to their accounts, some affected drivers were required to visit in-person help centers, a process that was often time-consuming and disruptive and that significantly impacted their ability to work. For many drivers, this issue created barriers to earning a consistent income.[29]
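Although Uber has not published the internals of Real-Time ID Check, selfie verification systems of this kind are commonly described as comparing two face embeddings against a fixed similarity threshold. The sketch below illustrates that general pattern; the embedding function, the threshold, and all names are assumptions made for illustration, not Uber’s or Microsoft’s actual implementation.

```python
# A hypothetical sketch of a selfie-versus-reference verification check.
# The embedding function and threshold are stand-ins, not a real API.
import numpy as np

def embed(photo: np.ndarray) -> np.ndarray:
    """Stand-in for a face-embedding model mapping an image to a unit vector."""
    vec = photo.flatten().astype(float)
    return vec / np.linalg.norm(vec)

def same_person(selfie: np.ndarray, photo_on_file: np.ndarray,
                threshold: float = 0.8) -> bool:
    """Accept the driver only if the two embeddings are similar enough."""
    similarity = float(embed(selfie) @ embed(photo_on_file))
    return similarity >= threshold

# If changes in appearance push the similarity score below the fixed
# threshold, the check fails and the account is flagged, even though the
# same person took both photos.
selfie = np.random.default_rng(1).random((32, 32))
reference = np.random.default_rng(2).random((32, 32))
print(same_person(selfie, reference))
```

Under this kind of design, any change in appearance that lowers the similarity score below the threshold, including changes related to gender transition, results in a failed check and a flagged account.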
This case illustrates the structural discrimination gender-diverse individuals face under facial recognition technology: even when applied to something as simple as a selfie security check, the technology can threaten the livelihoods of gender minorities.
Broader Implications
The real-world examples of airport security screenings and Uber’s facial recognition application reflect the larger threats posed by facial recognition technologies for gender-diverse individuals. As these technologies become increasingly ubiquitous in public spaces and commercial applications, the risks faced by gender-diverse individuals will likely only intensify. The proliferation of facial recognition technologies can fuel mass surveillance. For gender minorities, this extends beyond an invasion of privacy—it signifies a heightened risk of being outed without consent. Forced outing not only makes individuals targets of discrimination but can also have physically and legally dangerous repercussions, particularly in regions of the world where gender diversity is criminalized or stigmatized. The potential for state or corporate entities to access personal information about individuals’ gender identity can lead to a loss of autonomy, discrimination, and even persecution.
On a societal level, the normalization of a binary gender system within facial recognition technology has broader implications. Most current facial recognition algorithms are built on the assumption of a gender binary—male and female. Not only does this binary framework risk reinforcing stereotypes of what an individual “should” look like to be recognized as a man or a woman, which impacts everyone,[30] it also inherently devalues nonbinary, agender, and other gender-diverse identities. Because these algorithms are trained on limited data, they can reinforce the widespread social view that gender exists solely as a binary construct. Over time, this could create a cycle in which society’s understanding of gender becomes further entrenched, leading to more systemic discrimination directed at gender-diverse individuals. If the technology continues to be adopted without proper safeguards, it could contribute to an unsafe environment for gender minorities.
Conclusion
While facial recognition technology has made remarkable strides in accuracy and application, its inherent biases pose significant risks, particularly for transgender and gender-nonconforming individuals. The evolution of this technology, combined with its current limitations, highlights the urgent need for a more inclusive approach to its development and implementation. The misidentification of gender-diverse individuals by facial recognition technology can result in harmful consequences, including persecution, privacy violations, and security risks. As these technologies continue to permeate all facets of daily life, it is essential that ethical considerations are prioritized and solutions continue to be sought to mitigate the harmful effects of bias.
Endnotes
[1] Shaun Raviv, “The Secret History of Facial Recognition,” Wired, January 21, 2020, https://www.wired.com/story/secret-history-facial-recognition/.
[2] “A Brief History of Facial Recognition,” NEC New Zealand, May 12, 2022, https://www.nec.co.nz/market-leadership/publications-media/a-brief-history-of-facial-recognition/.
[3] NEC, “A Brief History of Facial Recognition.”
[4] NEC, “A Brief History of Facial Recognition.”
[5] NEC, “A Brief History of Facial Recognition.”
[6] NEC, “A Brief History of Facial Recognition.”
[7] Reuters, “U.S. Tests Bin Laden’s DNA, Used Facial ID: Official,” Reuters, May 1, 2011, https://www.reuters.com/article/us-binladen-dna-idUSTRE7411HJ20110502/.
[8] Sullivan, “Facial Recognition Technology.”
[9] NEC, “A Brief History of Facial Recognition.”
[10] “AI Bias Examples,” IBM, October 16, 2023, https://www.ibm.com/think/topics/shedding-light-on-ai-bias-with-real-world-examples.
[11] IBM, “AI Bias Examples.”
[12] IBM, “AI Bias Examples.”
[13] Jean Linis-Dinco, “Machines, Artificial Intelligence and Rising Global Transphobia,” Melbourne Law School, March 1, 2021, https://law.unimelb.edu.au/news/caide/machines,-artificial-intelligence-and-the-rising-global-transphobia.
[14] IBM, “AI Bias Examples.”
[15] Julia Angwin et al., “Machine Bias,” ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
[16] Linis-Dinco, “Machines, Artificial Intelligence and Rising Global Transphobia.”
[17] Jesse Damiani, “New Research Reveals Facial Recognition Software Misclassifies Transgender, Non-Binary People,” Forbes, October 29, 2019, https://www.forbes.com/sites/jessedamiani/2019/10/29/new-research-reveals-facial-recognition-software-misclassifies-transgender-non-binary-people/.
[18] Matthew Gault, “Facial Recognition Software Regularly Misgenders Trans People,” VICE, February 19, 2019, https://www.vice.com/en/article/facial-recognition-software-regularly-misgenders-trans-people/.
[19] Eduardo Salazar Uribe, “AI Boom Poses Threat to Trans Community, Experts Warn,” New York City News Service, April 9, 2024, https://www.nycitynewsservice.com/2024/04/09/ai-threat-transgender-nonbinary-people/.
[20] Samuel, “Some AI Just Shouldn’t Exist.”
[21] Edinger, “Facial Recognition Creates Risks for Trans Individuals, Others.”
[22] Edinger, “Facial Recognition Creates Risks for Trans Individuals, Others.”
[23] Linis-Dinco, “Machines, Artificial Intelligence and Rising Global Transphobia.”
[24] Julia Edinger, “Facial Recognition Creates Risks for Trans Individuals, Others,” GovTech, July 16, 2021, https://www.govtech.com/products/facial-recognition-creates-risks-for-trans-individuals-others.
[25] Millar, “Facial Recognition Technology Struggles to See Past Gender Binary.”
[26] Salazar Uribe, “AI Boom Poses Threat.”
[27] Salazar Uribe, “AI Boom Poses Threat.”
[28] Edinger, “Facial Recognition Creates Risks for Trans Individuals, Others.”
[29] Jaden Urbi, “Some Transgender Drivers Are Being Kicked off Uber’s App,” CNBC, August 8, 2018, https://www.cnbc.com/2018/08/08/transgender-uber-driver-suspended-tech-oversight-facial-recognition.html.
[30] Salazar Uribe, “AI Boom Poses Threat.”
Bibliography
“AI Bias Examples.” IBM, October 16, 2023. https://www.ibm.com/think/topics/shedding-light-on-ai-bias-with-real-world-examples.
Angwin, Julia, Jeff Larson, Lauren Kirchner, and Surya Mattu. “Machine Bias.” ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
“A Brief History of Facial Recognition.” NEC New Zealand, May 12, 2022. https://www.nec.co.nz/market-leadership/publications-media/a-brief-history-of-facial-recognition/.
Damiani, Jesse. “New Research Reveals Facial Recognition Software Misclassifies Transgender, Non-Binary People.” Forbes, October 29, 2019. https://www.forbes.com/sites/jessedamiani/2019/10/29/new-research-reveals-facial-recognition-software-misclassifies-transgender-non-binary-people/.
Edinger, Julia. “Facial Recognition Creates Risks for Trans Individuals, Others.” GovTech, July 16, 2021. https://www.govtech.com/products/facial-recognition-creates-risks-for-trans-individuals-others.
Gault, Matthew. “Facial Recognition Software Regularly Misgenders Trans People.” VICE, February 19, 2019. https://www.vice.com/en/article/facial-recognition-software-regularly-misgenders-trans-people/.
Linis-Dinco, Jean. “Machines, Artificial Intelligence and Rising Global Transphobia.” Melbourne Law School, March 1, 2021. https://law.unimelb.edu.au/news/caide/machines,-artificial-intelligence-and-the-rising-global-transphobia.
Millar, Molly. “Facial Recognition Technology Struggles to See Past Gender Binary.” Reuters, October 30, 2019. https://www.reuters.com/article/world/facial-recognition-technology-struggles-to-see-past-gender-binary-idUSKBN1X92OC/.
Raviv, Shaun. “The Secret History of Facial Recognition.” Wired, January 21, 2020. https://www.wired.com/story/secret-history-facial-recognition/.
Reuters. “U.S. Tests Bin Laden’s DNA, Used Facial ID: Official.” Reuters, May 1, 2011. https://www.reuters.com/article/us-binladen-dna-idUSTRE7411HJ20110502/.
Samuel, Sigal. “Some AI Just Shouldn’t Exist.” Vox, April 19, 2019. https://www.vox.com/future-perfect/2019/4/19/18412674/ai-bias-facial-recognition-black-gay-transgender.
Sullivan, Erin. “Facial Recognition Technology.” Montana Legislature Archive, September 2021. https://archive.legmt.gov/content/Committees/Interim/2021-2022/Economic%20Affairs/Studies/HJR-48/facial-recognition-technology.pdf.
Urbi, Jaden. “Some Transgender Drivers Are Being Kicked off Uber’s App.” CNBC, August 8, 2018. https://www.cnbc.com/2018/08/08/transgender-uber-driver-suspended-tech-oversight-facial-recognition.html.
Uribe, Eduardo Salazar. “AI Boom Poses Threat to Trans Community, Experts Warn.” New York City News Service, April 9, 2024. https://www.nycitynewsservice.com/2024/04/09/ai-threat-transgender-nonbinary-people/.

