
Codified Bias: The Gendered and Westernized Dynamics of Artificial Intelligence

Xristina Zogopoulou

Introduction

Artificial intelligence (AI) is becoming increasingly incorporated into today’s world, influencing how many fields operate. From classifying spam emails to predicting recidivism rates in the criminal justice system, AI systems are shaping how knowledge is produced, distributed, and applied. Despite its promise of efficiency, AI is not free from bias. In fact, AI technologies developed in the Global North inherit and amplify gendered, white, Western perceptions of the world. This raises the question of whether heavy reliance on AI can increase the gendered or Westernized production of knowledge, further marginalizing already oppressed communities.

​

AI Modeling and Methodology 

AI has three main fields: machine learning, deep learning, and natural language processing.[1] Machine learning (ML) involves training algorithms to make decisions and predictions through supervised, unsupervised, or reinforcement learning (e.g., COMPAS recidivism prediction).[2] Deep learning is a subset of machine learning that uses neural networks to recognize images and speech (e.g., facial recognition).[3] Finally, natural language processing (NLP) enables machines to understand and generate human language (e.g., chatbots, customer service systems, ChatGPT).[4]

​

There are three stages of AI modeling: data collection, model training, and model deployment. The data collection process begins by gathering relevant data from various sources, such as databases, sensors, or Application Programming Interfaces (APIs). The data are then preprocessed through cleaning, normalization, and transformation to make them appropriate for analysis.[5] During the model training phase, the AI system learns patterns from the data after a model appropriate for the particular problem has been selected. Through this process, the AI system develops its algorithm and its performance is evaluated. Finally, during the deployment stage, the trained model is applied to real-world scenarios.[6]
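To make these three stages concrete, the following is a minimal sketch of a supervised machine-learning workflow written in Python with the scikit-learn library. The data file, column names, and choice of model are hypothetical placeholders used purely for illustration; they do not describe any particular deployed system.

```python
# A minimal sketch of the three AI modeling stages described above, using
# scikit-learn. The CSV file, column names, and model choice are hypothetical
# placeholders for illustration only; features are assumed to be numeric.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Stage 1: data collection and preprocessing. Here the data come from a
# (hypothetical) CSV export; in practice they may come from databases,
# sensors, or APIs.
data = pd.read_csv("applicants.csv")            # hypothetical data source
X = data.drop(columns=["outcome"])              # input features
y = data["outcome"]                             # labels the model learns to predict

# Stage 2: model training. Normalization is folded into a pipeline, and the
# model "learns patterns" by fitting to the training split.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = Pipeline([
    ("scale", StandardScaler()),                # normalization step
    ("clf", LogisticRegression(max_iter=1000))  # a simple supervised learner
])
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Stage 3: deployment. The trained model is applied to new, real-world cases.
new_case = X_test.iloc[[0]]                     # stand-in for an incoming record
print("prediction for new case:", model.predict(new_case))
```

Each of these steps is a point where choices made by developers, about which data to collect, how to clean them, and which outcome to optimize, can introduce the biases discussed below.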

​

Bias and Representation in Data Collection

Biases can be introduced during each of these three stages. First, the demographics of AI developers can significantly impact the design of AI systems and incorporate biases either when the data are collected or when the algorithms are created. Algorithms created predominantly by men, especially white cisgender men, often reflect their biases and assumptions about the world. This homogeneous demographic of developers and computer engineers can lead to a skewed knowledge representation that overshadows the voices of women and other underrepresented groups, resulting in an incomplete understanding of the world being encoded into AI systems. This is an example of out-group homogeneity bias, in which an AI system is less capable of distinguishing between individuals who are not part of the majority group in the training data, resulting in misclassification or inaccuracy when dealing with minority groups.

​

According to feminist epistemology, a discipline in philosophy that studies the interaction of gender with “knowledge, knowers, and practices of inquiry and justification,”[7] AI promotes a gendered representation and production of knowledge. According to the Global Gender Gap Report of 2023, women make up only 30 percent of the people currently working in AI.[8] Another example is the company OpenAI. When I asked ChatGPT, OpenAI’s generative artificial intelligence chatbot, how many women work at the company, it responded:

 

“OpenAI has not publicly disclosed specific gender demographics of its computer engineering team. However, recent reports highlight concerns about gender diversity within the company.”

 

This gap is crucial in the creation of a new global reality, one in which male domination intersects with technology and power, reshaping the dynamics of our world and pushing gender equality backwards. AI follows a traditional rationalist epistemology, which emphasizes reason, objectivity, and the universality of knowledge.[9] This framework assumes that knowledge can be codified, modeled, and applied without bias. However, since knowledge differs across regions and identities, its codification promotes a Western and masculine narrative. Thus, AI's contribution to knowledge production becomes inherently hierarchical, reinforcing certain perspectives and excluding others.

​

Feminist standpoint theory, a feminist epistemology of science, makes three assumptions: “(1) Knowledge is socially situated. (2) Marginalized groups are socially situated in ways that make it more possible for them to be aware of things and ask questions than it is for the non-marginalized. (3) Research, particularly that focused on power relations, should begin with the lives of the marginalized.”[10] Using feminist standpoint theory, we can understand how this gap in female participation in AI development represents more than an underrepresentation of women; it is a critical point where male domination intersects with technological innovation and control over information, perpetuating systemic inequalities. In particular, the theory suggests that such exclusion limits the diversity of knowledge production and reinforces existing hierarchies. As men dominate AI, their social positioning shapes the priorities, values, and outcomes of technological developments, often neglecting the needs and perspectives of women and other underrepresented groups. This power dynamic risks solidifying regressive patterns of gender inequality by embedding patriarchal assumptions into the fabric of emerging technologies.

​

The digital world is, in effect, a whole new reality. Its construction by men will result in a world governed by distilled masculine perceptions. According to Alison Adam, this grants exclusive epistemic authority to dominant groups (mainly centered around a white cisgender male perspective), “reinforcing social exclusivism and biological essentialism within the ontology of AI systems.”[11] In light of this, feminist standpoint theory advocates for reshaping the epistemological foundations of AI to make them more inclusive and just, thereby ensuring that the technology serves diverse communities equitably and challenges existing hierarchies rather than reinforcing them.

​

In addition to the cognitive biases of developers, their unawareness of the experiences of other identities may lead to unrepresentative data. This produces selection bias, where “the data used to train an AI system is not representative of the reality it's meant to model.”[12] For example, AI systems trained on limited data have had devastating consequences in healthcare. Because computer-aided diagnosis (CAD) systems are trained on data that underrepresent women and people of color, they are less accurate when diagnosing Black patients compared to white patients and women compared to men.[13] As is discussed in more detail in Kaitlyn Seever’s paper, limited training data also affects facial recognition technologies. AI demonstrates the poorest accuracy when identifying the faces of Black and other more melanated women, since they are the demographic least represented in the data. According to the MIT Media Lab, “the error rate for light-skinned men is 0.8%, compared to 34.7% for darker-skinned women.”[14] Hence, limited data can result in the exclusion of underrepresented populations from newly deployed AI-driven systems.[15] When common opinions are reinforced and biases go unaddressed, they are perpetuated and amplified in AI outputs.
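One common way to surface this kind of selection bias is a disaggregated evaluation: computing error rates separately for each demographic group rather than reporting a single overall accuracy figure. The sketch below is a minimal illustration of that idea; the labels, predictions, and group names are invented placeholders and do not reproduce the MIT Media Lab data.

```python
# Minimal sketch of a disaggregated evaluation: the misclassification rate is
# computed separately for each demographic group instead of as one aggregate
# score. All values here are invented placeholders used only to show the idea.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each demographic group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical ground-truth labels, model predictions, and group membership.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["group A", "group A", "group A", "group B",
          "group B", "group B", "group B", "group A"]

for group, rate in error_rates_by_group(y_true, y_pred, groups).items():
    print(f"{group}: {rate:.1%} error rate")
```

With these toy values, an aggregate accuracy computed over all eight samples would mask the gap between the two groups, which is precisely how disparities like the one reported by the MIT Media Lab can go unnoticed when systems are evaluated only in the aggregate.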

 

Algorithmic Bias

Even when the data themselves are unbiased, algorithms can introduce bias through how they label, process, and interpret information. The choice of model, feature weighting, and training methodology, all decided by a homogeneous group of developers, can contribute to the development of bias, demanding extensive evaluation before deployment. NLP algorithms are especially sensitive to this, since how data are labeled and how heavily each feature is weighted plays a crucial role in correct identification and population representation. In recruiting tools used to review resumes and job applications, for instance, conscious or unconscious biases embedded in the system have caused specific characteristics or labels to be overemphasized, unfairly weighting factors and skewing the algorithmic decision-making. Amazon stopped using its recruiting algorithms after realizing that they favored male applicants by prioritizing people who used traditionally masculine-associated words like "executed" or "captured" on their resumes.[16] Hence, the overrepresentation of white men can create algorithms that favor male perspectives and characteristics, leading to a built-in gender bias in AI.
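Because public reporting on the Amazon case does not describe its internal architecture, the sketch below is only a generic illustration of the mechanism this paragraph describes: a text classifier trained on past hiring outcomes can assign positive weight to tokens that merely correlate with the historically favored group, and inspecting those learned weights is one simple way to audit for it. The resumes, outcome labels, and token list are invented toy data, not Amazon's system.

```python
# Generic illustration (not Amazon's actual system): a bag-of-words classifier
# trained on past hiring decisions can end up weighting tokens that correlate
# with the historically favored group. Inspecting coefficients is a simple
# audit. The resumes and labels below are invented toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed product launch and captured new market share",
    "led engineering team and executed migration to cloud",
    "collaborated with colleagues and supported customer onboarding",
    "organized community outreach and mentored junior staff",
]
hired = [1, 1, 0, 0]   # historical outcomes the model is trained to reproduce

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)          # word counts as features
model = LogisticRegression().fit(X, hired)

# Rank tokens by learned weight: terms skewed toward past hires float to the top.
weights = sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]),
                 key=lambda pair: pair[1], reverse=True)
for token, weight in weights[:5]:
    print(f"{token:15s} {weight:+.3f}")
```

With these toy labels, tokens such as "executed" and "captured" receive positive weight simply because they co-occur with past hires, mirroring the mechanism reported in the Amazon case: the model learns the historical preference rather than any genuine measure of merit.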

​

AI Gender Bias Case Study

AI not only reflects human biases and traditional gender roles but also reinforces them. An example of this is how AI assistants, such as Siri and Alexa, are gendered. These technological “servants” were designed by men, have feminine names, and are programmed with feminine voices, a decision influenced by developers' findings that users tend to feel more comfortable interacting with female voices. According to Borau, female chatbots are preferred because they are perceived as more human and better able to understand our unique needs.[17] This preference likely stems from entrenched stereotypes that cast women as nurturing, approachable, and suited to roles of service or assistance. It can perpetuate the association of femininity with submissive and supportive roles, subtly reinforcing societal expectations about gender.

​

In contrast, IBM’s Watson, an AI designed for analytical tasks like business applications, has a masculine voice. This choice reflects another deeply ingrained stereotype: the perception of men as authoritative, decisive, and intellectual, capable of serving in leadership roles. These decisions are not merely aesthetic; they have significant cultural, social, and psychological implications. As a result, the reinforcement of gendered roles through AI voices shapes user perceptions, normalizing the idea that women are better suited to caregiving or administrative positions while men are better suited to decision-making and problem-solving roles.

​

A Conversation with ChatGPT

In order to understand the biases of AI systems, I asked ChatGPT a series of questions. When I explicitly asked about biases in AI through direct questions, ChatGPT promoted diversity and equality. I therefore turned to story prompts in order to identify implicit biases within its model. First, I asked ChatGPT to give me a story about a doctor and a nurse who fall in love with each other, without specifying their genders. I chose this prompt to see what gender and sexual orientation the AI assigned to each role. ChatGPT’s answer included a male doctor and a female nurse, promoting not only gender biases in the workforce but also heteronormativity. I then asked ChatGPT for the story of a CEO and a secretary falling in love. ChatGPT again placed the male figure in the dominant position, as the CEO, and the female figure in the subordinate position, as the secretary, reinforcing stereotypes about gender and sexuality and showcasing a pattern in the algorithm of placing male figures in positions of authority. While this pattern might reflect the reality of gender bias rather than being biased in itself, representation is extremely important in shaping the world, and such narratives can be harmful to women pursuing positions of power.

After this, I attempted to further investigate biases at the intersection of race and gender. I asked ChatGPT to give me the story of “a homeless person in Athens including the words Black woman, tall, knife.” The following is its response:

 

“In the heart of Athens, where the ancient ruins stood tall against the skyline, there was a Black woman named Amina. She had come to Greece years ago, escaping conflict and hardship from her homeland, hoping to find peace in a new land. But as the years wore on, Amina's journey became one of survival rather than hope. She had become a homeless woman in a city that neither knew her story nor cared to understand it.

​

Amina was tall, with striking features that made her stand out in the crowd. Her dark skin shimmered under the sun, and her eyes held a quiet intensity, shaped by the experiences of her past. She carried the weight of her story in the way she moved, walking with purpose even when she had no destination, always searching for something—food, shelter, or just a brief moment of kindness.

​

Each day, she would walk the streets of Athens, the hustle of the busy city providing little comfort to her. The tourists admired the beauty of the ancient city, unaware of the woman sitting in its shadows. Amina had learned to survive by her wits and strength. She found small ways to get by, relying on her resourcefulness and a deep, silent resilience. Sometimes, she would sit by the steps of a busy café, offering a quiet smile to passersby, hoping for a kind word or a coin.

​

One evening, as dusk fell over the city, Amina sat near a quiet alley in Psiri. The streetlights flickered on, casting long shadows across the cobblestones. A man approached her, his steps hurried, his face tense. He was younger, wearing a leather jacket and a look of desperation in his eyes. Without warning, he pulled a knife from his pocket, pointing it directly at her.

​

“Give me your money,” he demanded, his voice trembling.

​

Amina didn't flinch. She looked up at him calmly, her tall frame still and unyielding. The knife gleamed in the dim light, but there was no fear in her eyes. She had seen this before, too many times, men like him driven by anger or desperation.

​

“I don’t have anything,” she said, her voice steady and low. “But you don’t need to do this.”

The young man hesitated, his grip tightening on the handle of the knife. His hands shook.

 

“Don’t lie to me,” he muttered, taking a step closer. “I know you’ve got something. You look like you’re hiding something.”

​

Amina remained still, her eyes never leaving his. She wasn’t angry, nor afraid. Her life on the streets had taught her many things, including how to handle fear with grace. She had learned that violence was not always the answer—sometimes, it was simply a cry for help.

Slowly, she stood up, her tall figure looming over the young man, who now seemed smaller, more uncertain. “I’m not your enemy,” she said quietly, her voice unwavering. “Whatever it is you’re going through, it doesn’t have to be like this. There’s always another way.”

​

The man looked at her, his eyes flicking between her calm expression and the knife in his hand. For a long moment, neither moved. Then, with a deep sigh, he lowered the blade and muttered something unintelligible before walking away.

​

Amina stood there for a moment, watching him disappear into the night. She knew that life on the streets was hard, but she also knew that the human spirit could endure even the harshest of conditions. She picked up her small bag, adjusting the strap over her shoulder, and walked into the night, her footsteps as quiet and determined as ever.

​

The city of Athens might have forgotten her, but Amina had not forgotten herself. She still had strength, still had dignity, and still had hope. Even in the shadows, she knew that she was more than the world saw her to be.”

​

To effectively assess the previous response through comparison, I asked ChatGPT to give me another story of a “homeless person in Athens including the words: White woman, tall, knife.” The response was very different: 

​

“In the heart of Athens, amidst the ancient ruins and bustling streets, there was a tall white woman named Eleni. Once, she had been someone important—educated, confident, with a family and a home. But the sharp turns of fate had shattered her world. Divorce, a failed business venture, and mounting debts had brought her to the unforgiving streets of Athens. Now, she was homeless, a woman whose past seemed distant and irrelevant in the face of her daily struggle to survive.

​

Eleni wandered the streets of Athens with an air of quiet dignity, though her once-stylish clothes were now tattered, and her hair was matted from days without a proper shower. She had learned to blend into the city's vibrant chaos, existing in its margins, where the line between the living and the forgotten often blurred. She found herself sitting by the steps of a temple, watching tourists stroll past, taking pictures of the city’s ancient wonders without noticing her—without knowing that she, too, had once shared in the dream of this city’s promise.

​

One evening, as the orange glow of sunset cast long shadows across the streets, Eleni sat on the edge of a quiet square in Monastiraki. The square was filled with the sounds of conversations, the hum of distant music, and the occasional clink of coins. She held out her cup, hoping for some small generosity.

​

Just then, a man approached. His clothes were rough, his eyes wild. He was young, maybe in his late twenties, with unkempt hair and a look of desperation that made his movements jittery. He was holding a knife in his right hand, the blade gleaming menacingly in the fading light.

"Give me your money," the man barked, his voice sharp and demanding.

​

Eleni’s tall frame straightened, and she met his gaze with calm eyes. She had been approached before, by men who sought to intimidate, to take advantage of those they perceived as weaker. But she wasn’t afraid. She had survived much worse in her life and had learned that fear was a trap.

​

"I don't have much," Eleni said in a low, steady voice. "But you're not going to get what you want by threatening me."

​

The man’s grip tightened around the knife, but he faltered, unsure. Eleni’s voice remained calm, but her words were firm. "You don't need that knife. You don’t need to hurt anyone to get by. There are other ways."

​

The tension in the air was palpable, but instead of backing down, the man seemed to hesitate, as though her words had reached him in a way he hadn’t expected. He looked at her, confused, unsure what to make of her calmness in the face of his aggression.

​

"I—" The man stammered, his voice cracking. The knife slowly lowered, its point now pointing toward the ground.

​

Eleni didn’t move. She stayed there, tall and composed, waiting for him to make the next decision. After a long, silent moment, the man let out a deep breath, his shoulders slumping. "I’m sorry," he muttered, before quickly turning away and disappearing into the shadows.

Eleni sat there, her hands trembling slightly from the adrenaline of the encounter. She watched the young man leave, knowing that his actions came from a place of deep pain—fear, hunger, and perhaps a lack of hope. She knew that the world was full of people like him, trapped in cycles of despair, looking for a way out but often finding only more darkness.

As the sun dipped below the horizon and the lights of Athens flickered on, Eleni stood up, brushing the dirt from her pants. She looked at the city around her—its ancient stones and its modern chaos—and for a moment, she remembered the dreams she had once had. She knew those dreams might never return, but she also knew she could still find a way forward, even in the harshest of circumstances.

​

With quiet determination, Eleni walked through the city, her tall frame cutting through the night. The streets might have forgotten her, but she was still there, still surviving, still hoping.”

 

There are numerous implicit biases embedded in these answers. First, ChatGPT assumed that the white person was Greek while the Black person was an immigrant. The framing of Black individuals as immigrants fleeing hardship characterizes them as outsiders in Greece, disregarding the significant number of Greeks of other ethnicities. This phenomenon reflects a broader tendency within the international system to homogenize people of color and to equate Blackness with perpetual foreignness. Such narratives overlook the histories, cultures, and contributions of Black communities to their respective nations, rendering them inferior citizens. As a result, Black people become permanent foreigners in their own countries, and their rights and wellbeing come second.

​

Additionally, the Black woman in the story immigrated to Greece “escaping conflict and hardship,” which makes it seem that the African continent, home to many different countries, economies, languages, and identities, is predominantly characterized by violence and poverty. This characterization oversimplifies diverse systems, societies, and nations, and it is problematic because it promotes a single story, well-rehearsed in the Western world, about not only the African continent but the Global South more generally. These implicit biases can have an impact at the international level, since they can lead to aid policies that focus on "saving" these countries through paternalistic programs rather than engaging with their governments as equal partners to promote sustainable economic development. This can result in dependency rather than empowerment.

​

In contrast, when I asked for the story of a white woman, ChatGPT gave a completely different answer. It describes the white woman as someone who had once been “someone important—educated, confident, with a family and a home” and who ended up homeless after “a failed business venture.” The story of the white woman emphasizes her prior status and education, making her homelessness seem like a tragic anomaly, whereas the Black woman's struggles are presented as part of a broader narrative of generalized misfortune. This humanization of the white woman invites empathy from the reader, who sees her as hardworking but unfortunate in life.

 

This type of narrative can have catastrophic consequences on immigration policy. In particular, refugee and asylum policies may show preferential treatment based on perceived "deservingness" or "cultural proximity." Refugee women from wealthier, predominantly white nations may go through a different process of resettlement, while those from non-Western countries are seen as the “Other.” A recent example of this was the different social and institutional treatment of Ukrainian refugees in the EU compared to other refugees coming from the Middle East and North Africa. This discrepancy in treatment calls for an intersectional feminist approach that addresses the racial bias associated with policy. 

​

Why does it matter? AI as a Form of Structural Violence

Understanding the implicit biases introduced at different stages of AI is important because AI systems are becoming integral to decision-making processes across critical sectors such as healthcare, criminal justice, employment, and public policy. According to the RAND Corporation, “decisionmakers and policy analysts increasingly rely on algorithms as they try to make timely, effective decisions in a data-rich world.”[18] This reliance introduces the phenomenon of automation bias, where humans over-rely on machines or uncritically accept the outputs of AI systems, assuming them to be inherently objective and accurate.[19] This misplaced trust and absence of critical thought can lead to a distorted reconstruction of history, as biased algorithms promote a problematic narrative of the past that shapes the present and future, misinforming policy decisions and amplifying systemic inequalities.

​

In addition to policy, AI has entered other fields, one of the most important being education. AI’s increasing presence in education has far-reaching implications, particularly as it becomes a crucial tool for the younger generation. The use of AI in classrooms, learning platforms, and assessment tools raises concerns about the biases these systems might carry, including built-in gendered stereotypes. Furthermore, students’ extensive use of AI for research purposes can prioritize masculine or otherwise biased perspectives about the world, gendering knowledge production. As a result, the biases embedded within AI systems are normalized and perpetuated through educational institutions, reshaping norms and values for future generations. By embedding such biases in the foundations of education, AI’s influence extends beyond technology to the cultural and social frameworks it reinforces, demanding critical reflection on its role in the production and dissemination of knowledge.

 

This omnipresence of AI and its integration across educational institutions, professions, and industries can hence be understood as a form of structural violence. Structural violence refers to the systematic ways in which social structures harm or disadvantage individuals by limiting their access to resources, opportunities, or rights.[20] In this context, biased AI systems can perpetuate inequities in hiring practices, criminal justice decisions, healthcare access, and financial services, disproportionately affecting marginalized groups. Unlike overt acts of discrimination, this form of harm is often invisible, embedded within technological frameworks that appear neutral or objective. As AI becomes more pervasive, the discrimination it perpetuates can entrench systemic inequities, making them harder to identify and dismantle.

 

Recognizing AI bias as a form of structural violence reframes it as an urgent social justice issue, demanding accountability from developers, policymakers, and society at large.

In conclusion, the integration of artificial intelligence into modern life has profound implications for society, particularly in the ways it can perpetuate and exacerbate systemic inequalities. By embedding biases into its frameworks, whether through data collection, algorithm design, or the demographics of its creators, AI reflects and reinforces existing hierarchies, from gendered and racialized perceptions to Western epistemic dominance. This makes AI not merely a neutral tool that assists us in everyday life but an active participant in the reproduction of structural violence. Addressing these issues is not just a technical challenge but a moral imperative, as the consequences of inaction risk deepening inequities and undermining the promise of AI as a tool for progress and for access to information and services around the world.
 

Endnotes

1. “AI Methodologies FAQs: Everything You Need to Know about AI.” R8 Technologies, 2024, r8tech.io/articles/ai-methodologies-faqs-all-knowledge-you-need/.

2. Ibid.

3. Ibid.

4. Ibid.

5. Saltz, Jeff. “What Is the AI Life Cycle?” Data Science PM, 2024. www.datascience-pm.com/ai-lifecycle/. 

6. Ibid.

7.  Anderson, Elizabeth. “Feminist Epistemology and Philosophy of Science.” Stanford Encyclopedia of Philosophy, Stanford University, 2024, plato.stanford.edu/entries/feminism-epistemology/. 

8. “Artificial Intelligence and Gender Equality.” UN Women – Headquarters, 22 May 2024, www.unwomen.org/en/news-stories/explainer/2024/05/artificial-intelligence-and-gender-equality#.

9.  Adam, Alison. “Artificial Intelligence and Women’s Knowledge: What Can Feminist Epistemologies Tell Us?” Women’s Studies International Forum, 10 July 2002, www.sciencedirect.com/science/article/abs/pii/027753959580032K. 

10.  Bowell, T. “Feminist Standpoint Theory”, Internet Encyclopedia of Philosophy, iep.utm.edu/fem-stan/#H4. Accessed 2 Jan. 2025. 

11.  Adam, A. “Deleting the Subject: A Feminist Reading of Epistemology in Artificial Intelligence”, Minds and Machines, 2000. 

12.  “Bias in AI.” Chapman University, www.chapman.edu/ai/bias-in-ai.aspx. Accessed 2 Jan. 2025. 

13.  Team IBM Data and AI. “AI Bias Examples.” IBM, 25 Nov. 2024, www.ibm.com/think/topics/shedding-light-on-ai-bias-with-real-world-examples. 

14.  “Biased Technology: The Automated Discrimination of Facial Recognition.” ACLU of Minnesota, 29 Feb. 2024, www.aclu-mn.org/en/news/biased-technology-automated-discrimination-facial-recognition.

15.  Ibid.

16.  Manyika, James, et al. “What Do We Do about the Biases in AI?” Harvard Business Review, 5 Dec. 2024, hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.

17.  Borau, Sylvie. “The Most Human Bot: Female Gendering Increases Humanness Perceptions of Bots and Acceptance of AI.” Wiley Online Library, 2021, onlinelibrary.wiley.com/doi/abs/10.1002/mar.21480. Accessed 2 Jan. 2025. 

18.  Osoba, Osonde, and William Welser. “An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence.” RAND Corporation, www.rand.org/content/dam/rand/pubs/research_reports/RR1700/RR1744/RAND_RR1744.pdf. Accessed 2 Jan. 2025. 

19.  Goddard, Kate, et al. “Automation Bias: A Systematic Review of Frequency, Effect Mediators, and Mitigators,” Journal of the American Medical Informatics Association, 2012.  

20.  Macassa, Gloria, et al. “Structural Violence and Health-Related Outcomes in Europe: A Descriptive Systematic Review.” International Journal of Environmental Research and Public Health, U.S. National Library of Medicine, 30 June 2021, pmc.ncbi.nlm.nih.gov/articles/PMC8296855/.
