Is AI Art Really ‘Art’?

Recent breakthroughs in how humans train artificial intelligence have made generative AI models highly prevalent and easily accessible. In the training process, machine learning models are fed data; human trainers then categorize this data and ‘teach’ the AI to replicate similar responses through system guidelines and distinct criteria. Through this training, several generative AI models are now available for public use, like OpenAI’s ChatGPT and DALL-E. With their popularity has come a range of debate and discourse. As AI becomes deeply integrated into day-to-day life in some countries, while remaining almost completely inaccessible in ‘technology deserts’ elsewhere, distinct ‘factions’ have emerged across the globe, notably people who either love or hate AI. In this vein, some people argue that AI can produce meaningful art. Others say this is impossible, as there is no purposeful decision-making in AI, just algorithms. In America, cultural landmarks for AI include the launch of ChatGPT and the use of generative AI in national TV commercials, like Coca-Cola’s 2024 holiday commercial, which was completely AI-generated. As our technology and social landscape evolve, so does human art, and AI alongside it. However, art has historically been created as a form of human rebellion, protest, and social critique. Without the lived experience of human society and this greater cultural awareness, generative AI would, by the definition of human art, be unable to create a piece of cultural, ethnic, or intellectual ‘art.’ Yet because AI has been trained on non-consensually collected data, is deeply biased against people of color, and carries an immense environmental impact (largely from cooling AI servers), its generated art still makes a political statement in and of itself, despite being rather separated from the historical and moral values of human art.
Thus, AI art seems to stand in its own category of ‘generated things,’ far separate from the creations of people. Viewed this way, AI even has its own political values, stemming from the unresolved biases of human beings, values that serve a greater capitalistic and Euro-centric purpose and cannot be ignored for the sake of convenience and aesthetics. Through the lenses of socioeconomic discourse, critical race theory, ecocriticism, and poststructuralism, this essay analyzes the generated responses of artificial intelligence as their own type of art, carrying an inherent political message created both by the users of AI and by the trainers, owners, and scientists behind a given model.

The negative environmental impact of artificial intelligence is intrinsic to every use of it, whether AI is analyzing medical data or CVs, or writing a 250-word short story, because the server rooms that store AI data and house these systems consume massive amounts of water to stay cool. In a study entitled “Making AI Less ‘Thirsty,’” researchers note, “The global AI demand is projected to account for 4.2-6.6 billion cubic meters of water withdrawal in 2027… To respond to the global water challenges, AI can, and also must, take social responsibility and lead by example by addressing its own water footprint” (Li et al. 2023). So not only is global AI usage incredibly taxing on our environment, and it will continue to be until someone invents environmentally sustainable data storage and servers, but using AI implicitly supports the climate crisis. When someone uses AI to write a story or create a photograph, this environmental undertone cannot and should not be divorced from the meaning of their work; to create AI art is to stand by the malpractices of the tech industry. Ecocriticism considers how the environment is portrayed in a work, and in artificially produced art, regardless of the user prompt, the environment is inherently forsaken. A user could aesthetically or politically support the natural earth in their prompt, hoping for an image filled with beautiful nature or a new poster promoting Smokey Bear, for example. However, requesting that these items be created by an AI system in the first place would contradict those motives. Some people, though, have little to no concern for the environment or do not feel this would affect the meaning behind their work. This generally seems to stem from a fundamental misunderstanding of what art is.
Some pieces of art are said to be purely aesthetic, a study of beauty, but a cultural understanding of beauty is itself shaped by trends, class status, marketing and media, and proximity to money. For example, some cultures have historically viewed fair skin as more desirable. Thus, a painting of a beautiful person with pale skin, even one that merely attempts to be ‘aesthetic,’ carries larger cultural undertones that speak to society. As artificial intelligence models are trained on human data, they reflect our social biases and compound our technological failings, particularly when it comes to environmental sustainability. To ignore this is to participate in a larger capitalistic system that divorces the means of production from the meaning of the product, which ultimately devalues ethically made human art.

AI art exposes the flaws within our society, especially when training data is categorized by a non-diverse set of human graders. One study has quantified this discrimination by examining AI-produced images. The Washington Journal of Law, Technology & Arts notes, “In a generative span of 5,000 images with the Stable Diffusion AI, depictions of prompts for people with higher-paying jobs were compared to people with lower-paying jobs. The result was an overrepresentation of people of color for lower-paying jobs” (Washington Journal of Law, Technology & Arts 2023). In this way, AI showcases the stereotypes pushed onto people of color by the predominantly white societies where many AI systems are primarily developed and tested; the United States, Silicon Valley in particular, and England are currently two such powerhouses of AI production. To prompt the generation of a piece of AI art, then, is to be complicit in these inherent biases: the system that produces a stereotypical piece of art is the same one that can generate a non-stereotypical image. Additionally, it is important to note that “a simple fix to increase representation is not so easy. AI computing is built based on models that already exist; a new model will be based off of an older model, and the biases present in the older algorithm may stand” (Washington Journal of Law, Technology & Arts 2023). So long as the algorithmic and human biases that create stereotypical images persist, there is no such thing as an unbiased AI image, and these political undertones will continue to surface as models build on one another. Without starting these models from scratch, these sources of bias will remain, even if only in small fragments.

Furthermore, this bias also promotes socioeconomic disparity, notable not only in AI’s depiction of race: generative AI also threatens to displace artists. Smaller artists who might otherwise have received commissions to bring a person’s or company’s vision to life may no longer find the same amount of work, and this is especially true for corporate artists, graphic designers, and animators. Coca-Cola’s 2024 holiday commercial, “The Holiday Magic Is Coming,” was entirely created with Real Magic AI. Despite being “the largest beverage manufacturer and distributor in the world” (Encyclopaedia Britannica 2025), Coca-Cola chose to promote AI alongside its business, likely hoping to use the novelty of AI technology as a marketing tactic. The trailer is aesthetically well-edited, but the clips themselves are jarring, and the noticeably low quality of the footage creates its own sense of an AI ‘uncanny valley.’

When it comes to the effects of all AI, generative and non-generative, on class, Sam Manning, a senior research fellow at the Centre for the Governance of AI, notes, “In the near-term, AI-driven productivity boosts could be skewed towards high-income workers, leaving lower-wage workers behind” (Manning 2024). While the general public’s fear of AI job displacement may not be entirely reasonable at present, it is true that, in many ways, human work is being undervalued for the sake of these technological innovations. Without careful consideration, oversight, and legislation, AI will continue to deepen socioeconomic disparities throughout the world, both within individual countries and by pushing some nations into globally impoverished status. It is impossible to say that generative AI alone will cause this. However, generative AI will push artists to value their creations in a new way and to price them against the market standard set by OpenAI’s relatively cheap, if not free, services. This moral implication speaks to how creativity and the arts are valued across societies, particularly in America and England, notable AI hubs across the globe. As with the environmental effects of artificial intelligence, the socioeconomic divide that AI may widen can seem somewhat detached from the generation of images or writing itself. However, under traditionally capitalistic systems, money spent is an expression of values, and putting that money into machinery over human creativity prizes technology, simplicity, and generation speed over nuance, experience, and emotional intelligence. If an artificially intelligent culture takes hold, the class divide will widen to encompass even more artists than before.

Viewing artificial intelligence and its effects through the lenses of ecocriticism, critical race theory, and socioeconomic status, it is clear that AI art is never free of politics or meaning. These values do not originate in the machine; rather, we, as human shepherds to the machine, have pushed our cultural values, beliefs, and stereotypes onto AI. To analyze AI art successfully, then, one must acknowledge and break down these larger moral and ethical deficits. Human art can have similar effects and can perpetuate similar stereotypes, but in that case a human being makes the choice. When AI perpetuates a stereotype, the model is not making a choice; the human being who created the biased data, or who graded the quality of the model’s responses, made that choice. AI only speaks for the larger moral failings of human society, and it can never be any better than the world it comes from. Without considering these facets, it is impossible to view AI generations with depth or to encourage a culture of accountability.


Works Cited

“Art For Our Sake: Artists Cannot Be Replaced by Machines – Study.” University of Oxford, 3 Mar. 2022, www.ox.ac.uk/news/2022-03-03-art-our-sake-artists-cannot-be-replaced-machines-study.

Washington Journal of Law, Technology & Arts. “The Complexities of Racism in AI Art.” 5 Nov. 2023, wjlta.com/2023/11/04/the-complexities-of-racism-in-ai-art.

Britannica, The Editors of Encyclopaedia. “The Coca-Cola Company.” Encyclopedia Britannica, 11 Mar. 2025, https://www.britannica.com/money/The-Coca-Cola-Company. Accessed 15 March 2025.

Chatterjee, Anjan. “Art in an Age of Artificial Intelligence.” Frontiers in Psychology, vol. 13, article 1024449, 30 Nov. 2022, doi:10.3389/fpsyg.2022.1024449.

Coca-Cola. “The Holiday Magic Is Coming.” YouTube, 18 Nov. 2024, http://www.youtube.com/watch?v=4RSTupbfGog.

Li, Pengfei, et al. “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models.” arXiv.org, 6 Apr. 2023, arxiv.org/abs/2304.03271.

Manning, Sam. “AI’s Impact On Income Inequality In the US.” Brookings, 3 July 2024, http://www.brookings.edu/articles/ais-impact-on-income-inequality-in-the-us.

The Open Learning & Teaching Collaborative. “How Do We Know What We Know? Evaluating AI and Other Sources of Information.” 22 May 2024, colab.plymouthcreate.net/resource/spin-2024-teaching-in-gen-z/how-do-we-know-what-we-know-evaluating-ai-other-sources-of-information.
