Problematic Stereotypes in AI

Alexina Gillis

Critical Theory

Final Project

Problematic Stereotypes in AI

As an English major, I find the recent rise of AI in art and literature increasingly worrisome for a multitude of reasons. The first is the concern that AI can rip off people’s hard work and use it to build its own narrative. As someone who wants to go into publishing or journalism, I worry that I will run into this problem more and more often as time goes on. Plagiarism has always been a huge no-no, so why is it okay for AI to do it? Beyond its effects on writing and literature, AI also puts the jobs of artists at risk. If AI can generate any sort of picture or writing from just a few keywords, why would we pay actual people to do work that can be done for free? Is there a difference in the authenticity of the piece? These are questions I will have to keep in mind as I pursue a career in writing.

Personally, I believe that a piece is more authentic and original if it comes from an actual human being rather than from an AI that has gathered material from across the internet to create it. I became more aware of this after reading Matthew Cheney’s “There Is No Ethical Use of AI.” Cheney offers a list of reasons that AI is unethical. One of his points is that “AI language models are trained on huge amounts of writing that the corporations who own the tools did not get consent to use” (Cheney). Consent is a massive issue within the debate over AI, and it applies to writing as well as to every other type of art. Beyond consent, there are also issues regarding the environment and climate change. His first point is that “This is a technology that requires tremendous resources of energy, water, infrastructure, and thus has a significant impact on the environment” (Cheney). Some may argue that the use of AI helps save resources, when in actuality it does the exact opposite. The discarded electronics it leaves behind are called “e-waste”: the hardware materials used up and thrown away while building and running AI systems. AI also requires a large amount of energy to run and train, which leaves a massive carbon footprint.

According to a study by OpenAI researchers, the computing power used to train cutting-edge AI models has doubled roughly every three and a half months since 2012 (Kanungo), which means AI’s energy appetite will inevitably keep climbing. To put these numbers into perspective, researchers at the University of Massachusetts found that “training can produce about 626,000 pounds of carbon dioxide, or the equivalent of around 300 round-trip flights between New York and San Francisco – nearly 5 times the lifetime emissions of the average car” (Kanungo). That is an enormous amount of energy, and something I do not think people consider often enough.
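To sanity-check the scale of those figures, here is a quick back-of-the-envelope calculation of my own (the pounds-to-tons conversion is standard; everything else comes straight from the quoted comparisons, so the per-flight and per-car numbers are only what the article’s own ratios imply):

```python
# Back-of-the-envelope check of the UMass figures quoted above.
# Only the unit conversion is assumed; the rest follows from the quote.
LBS_PER_METRIC_TON = 2204.62

training_lbs = 626_000
training_tons = training_lbs / LBS_PER_METRIC_TON
print(f"One training run: ~{training_tons:.0f} metric tons of CO2")

# If that equals ~300 NY-SF round trips, each round trip works out to:
print(f"Implied per round trip: ~{training_tons / 300:.2f} tons")

# 'Nearly 5 times the lifetime emissions of the average car' implies:
print(f"Implied lifetime car emissions: ~{training_tons / 5:.0f} tons")
```

Running this gives roughly 284 metric tons per training run, about one ton per round-trip flight, and about 57 tons for a car’s lifetime, so the article’s comparisons are at least internally consistent.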

Another big problem with AI is that it reinforces and perpetuates problematic stereotypes about race and gender. Because AI is trained on material already on the internet, it also absorbs some of the less-than-savory views that people put out into the world. This becomes visible when you ask AI to generate certain images. According to Netcentric, “Technology is a reflection of our society. Because AI is trained on existing data, it’s easy for bias to permeate AI-generated content and results. Countless studies in the last few years have shown that generative AI amplifies existing biases in their output. A study by Leipzig University and AI startup Hugging Face used 3 popular AI image-generating models to generate 96,000 images of people using different terms and saw that when given a prompt like ‘CEO’ or ‘director’, 97% of the generated images showed white men” (Netcentric).
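For a sense of how a study like this works mechanically, here is a minimal sketch of just the tallying step. The “model” is a simulated stand-in that reproduces the reported skew; the study’s actual image generators and demographic classifiers are not shown here:

```python
import random
from collections import Counter

# Simulated stand-in for: generate an image for `prompt`, then
# classify who it depicts. Weights mirror the study's reported 97%.
def generate_and_classify(prompt):
    return random.choices(["white man", "everyone else"], weights=[97, 3])[0]

def audit(prompt, n_images=1_000):
    # Generate many images for one prompt and tally the depicted groups.
    counts = Counter(generate_and_classify(prompt) for _ in range(n_images))
    for group, count in counts.most_common():
        print(f"{prompt!r}: {group} = {100 * count / n_images:.1f}%")

audit("CEO")
```

The point of the sketch is that the audit itself is simple counting; the hard (and contested) parts are the generation and classification steps it stands in for.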

Melissa Heikkilä’s MIT Technology Review piece, “These New Tools Let You See for Yourself How Biased AI Image Models Are,” lets readers explore these models’ skewed outputs for themselves (Heikkilä).

The same thing happens with prompts like “scientist” or “engineer.” So AI image generation reinforces problematic stereotypes about gender and race at the same time. The fact that these keywords produce almost exclusively white men is proof that AI cannot be completely fair and ethical. Imagine what that could do to a young girl who wants a job in STEM. If AI becomes much more mainstream, and little kids use it to see what their dream jobs look like, seeing those jobs represented only by people who don’t look like them could be incredibly discouraging. There are “norms” and expectations in our society, and even AI conforms to them.

These issues surround race in particular, and recently Google ran into exactly this problem. According to Netcentric, “This has become such a heated topic that in the last weeks, Google tried to correct the functionality of its own Gemini AI tool, but it backfired. In an attempt to push more diversity into its generated images, it started generating images of women and people of color when given prompts like ‘American founding fathers’ or ‘WW2 soldiers’. The inaccuracy and erasure of real historical discrimination caused even more backlash – and it’s not the answer to promoting diversity in generative AI” (Netcentric). So, in trying to be more inclusive, the AI instead created more backlash and an even more problematic situation.

According to Chapman University, there are four stages at which bias can enter an AI system. These are:

“Data Collection: Bias often originates here. The AI algorithm might produce biased outputs if the data is not diverse or representative.

Data Labeling: This can introduce bias if the annotators have different interpretations of the same label.

Model Training: A critical phase; if the training data is not balanced or the model architecture is not designed to handle diverse inputs, the model may produce biased outputs.

Deployment: This can also introduce bias if the system is not tested with diverse inputs or monitored for bias after deployment.” (Chapman University).

Beyond the stages at which bias enters, Chapman also lists several types of bias: selection bias, confirmation bias, measurement bias, stereotyping bias, and out-group homogeneity bias. These range from training on too narrow a slice of material, to latching onto existing trends and reinforcing the stereotypes they carry, to failing to tell apart people who are not part of a majority group. A rough sketch of what a check for the first of these might look like appears below.
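Here is that sketch: a tiny, hypothetical data-collection-stage check that counts how often each group appears in a dataset’s labels before training ever starts. The dataset and group names are invented for illustration, and a real audit would be far more involved:

```python
from collections import Counter

def balance_report(labels):
    """Print each group's share of the dataset, flagging small groups."""
    total = len(labels)
    for group, count in Counter(labels).most_common():
        share = 100 * count / total
        flag = "  <-- possible selection bias" if share < 10 else ""
        print(f"{group:15s} {count:5d} ({share:5.1f}%){flag}")

# Hypothetical label metadata for a scraped image dataset:
labels = ["white man"] * 970 + ["white woman"] * 20 + ["woman of color"] * 10
balance_report(labels)
```

Even a crude count like this would have flagged the 97%-white-men skew before a model was ever trained, which is Chapman’s point about bias originating at the data-collection stage.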

Realistically, there is no way that AI can be used in a fully positive and ethical way. I happily agree with Matthew Cheney: there really is no ethical use of AI. We have spent all of history trying to erase as many stereotypes and harmful ideas as humanly possible, and AI is pedaling us backwards.

Works Cited

Cheney, Matthew. “There Is No Ethical Use of AI.” Finite Eyes, 24 Mar. 2024, finiteeyes.net/technology/there-is-no-ethical-use-of-ai/.

Heikkilä, Melissa. “These New Tools Let You See for Yourself How Biased AI Image Models Are.” MIT Technology Review, 22 Mar. 2023, www.technologyreview.com/2023/03/22/1070167/these-news-tool-let-you-see-for-yourself-how-biased-ai-image-models-are/.

Kanungo, Alokya. “The Real Environmental Impact of AI.” Earth.Org, 5 Mar. 2024, earth.org/the-green-dilemma-can-ai-fulfil-its-potential-without-harming-the-environment/.

“Tackling Gender Bias with AI: Strategies for a More Inclusive Future.” Netcentric, 8 Mar. 2024, www.netcentric.biz/insights/2024/03/gender-bias-ai-international-women-day.

Chapman University. “Bias in AI.” Chapman University, www.chapman.edu/ai/bias-in-ai.aspx. Accessed 8 May 2024.
