Generative AI in art reinforces inequality rather than creativity


CO-EDP, VisionRI | Updated: 23-01-2026 12:17 IST | Created: 23-01-2026 12:17 IST
Representative Image. Credit: ChatGPT

Generative AI is now raising urgent questions about whose work is valued, whose labor is erased, and which social biases are quietly reinforced in the process. With AI-generated images winning art competitions, flooding social platforms, and entering commercial pipelines, critics argue that the technology is not disrupting inequality so much as accelerating it.

A new peer-reviewed study published in AI & Society, titled "Garbage in, garbage out? How the monster of AI art reflects human fault, bias, and capitalism in contemporary culture," positions AI-driven art systems not as autonomous creators but as mirrors of existing social, economic, and cultural structures.

AI art and the illusion of creative autonomy

According to the research, generative models do not learn in the way human artists do. They are trained on vast datasets of existing cultural output and optimize their results toward patterns that already dominate visual and commercial culture. As a result, AI-generated art reflects what has been most visible, most profitable, and most widely circulated in the past.
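The mechanism is easy to see in miniature. The toy sketch below is illustrative only and not code from the study; the style labels and counts are invented. It shows why a generator that samples outputs in proportion to their frequency in its training data will keep reproducing whatever already dominated the corpus:

```python
# Toy illustration (invented data): a generator that samples in
# proportion to training-set frequency reproduces the majority style
# and almost never surfaces underrepresented ones.
from collections import Counter
import random

# Hypothetical corpus with the imbalance typical of scraped web data.
training_corpus = (
    ["polished_hyperreal"] * 900
    + ["established_canon"] * 90
    + ["emerging_artist_style"] * 10
)

style_counts = Counter(training_corpus)

def generate(n=1000):
    """Sample n outputs weighted by how often each style appeared
    in training, mimicking a frequency-matching generator."""
    styles, weights = zip(*style_counts.items())
    return Counter(random.choices(styles, weights=weights, k=n))

print(generate())
# Typical run: the dominant style accounts for roughly 90% of outputs,
# while the emerging artist's style barely appears at all.
```

Scaled from this toy example to web-sized datasets, the same logic is the "garbage in, garbage out" dynamic the study describes: visibility in, visibility out.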

This dynamic explains why AI art systems consistently favor polished, hyperreal, and aesthetically smooth outputs. These traits align with platform algorithms and online attention economies, where images must perform instantly and translate easily across feeds, ads, and marketplaces. The study finds that when AI systems are prompted to generate new artistic styles or represent less established artists, their outputs often collapse into generic forms or misrepresent the artist entirely. Novelty, in this context, becomes an illusion built from recombined conventions.

The research also addresses controversies surrounding AI artworks winning competitions historically designed to recognize human skill and labor. These moments, widely debated in the art world, expose a structural mismatch between how creativity is judged and how AI operates. Judges often evaluate final outputs without full visibility into the training data, automation level, or labor displacement involved. The result is a growing tension between human authorship and machine-assisted production, with few institutional safeguards to distinguish between the two.

The study rejects the idea that AI's limitations are merely technical growing pains. Instead, it frames them as structural features of systems designed within capitalist production models. AI art is optimized for efficiency, replication, and market recognition, not for experimentation, risk, or conceptual depth. This orientation, the paper argues, is why AI struggles to move beyond imitation even as it appears increasingly sophisticated.

Bias, appropriation, and the political economy of AI creativity

The study documents how AI art systems intensify long-standing inequalities in cultural production. One of its key findings is that AI training practices rely heavily on uncredited and non-consensual use of existing creative work. Marginalized artists, particularly women and artists of color, are disproportionately affected because their work is often absorbed into datasets without compensation while being rendered invisible in final outputs.

The paper places this practice within a broader history of artistic appropriation, arguing that AI accelerates rather than invents these dynamics. By scaling extraction to industrial levels, generative systems normalize the removal of labor from creative value. What once required direct copying or citation now occurs invisibly through automated scraping and pattern learning, making accountability harder to trace.

Gendered and racial bias emerges as a recurring theme in the analysis. The study shows how AI-generated imagery consistently reproduces narrow beauty standards rooted in Eurocentric and hypersexualized norms. Female-presenting figures are frequently rendered as youthful, smooth, and stylized in ways that flatten identity and erase diversity. These patterns are not accidents but the result of training data shaped by platform popularity, advertising aesthetics, and historical imbalance in representation.

The research links these outcomes to what it describes as techno-patriarchal structures embedded in AI development. The majority of large-scale AI systems are built and governed by technology firms operating in male-dominated industries and answering to market-driven incentives. This context shapes not only what data is used but which outputs are rewarded, monetized, and promoted.

Economic pressures further compound these issues. As AI tools reduce the time and cost required to generate visually appealing content, they undercut creative labor markets already marked by precarity. Artists are pushed to compete with automated systems that can produce endless variations at minimal cost, while platforms reward speed and volume over depth and process. The study notes that these conditions mirror trends in other sectors affected by automation, where efficiency gains mask deeper losses in worker autonomy and recognition.

Environmental costs also enter the analysis. Training and running large AI models require significant energy and water resources, tying AI art production to broader concerns about sustainability. While these costs are often invisible to end users, the research argues that they must be considered alongside ethical debates about authorship and labor.

Rethinking ethics, accountability, and the future of creative AI

The study calls for a fundamental shift in how AI art is understood and governed. Rather than treating AI as an independent creative agent, the paper frames it as an extension of human decision-making systems. Every output reflects choices about data selection, model design, and economic priorities. Responsibility, therefore, cannot be delegated to the machine.

The research identifies copyright law as a critical area of failure. Current legal frameworks struggle to address AI-generated work because they are built around human authorship. This gap leaves artists vulnerable when their work is used for training and allows corporations to profit from ambiguous ownership structures. The study argues that piecemeal copyright fixes will be insufficient without broader integration of human rights and labor protections into AI regulation.

Ethical development, according to the paper, requires meaningful artist participation in the design and governance of creative AI systems. This includes transparent training practices, consent-based data use, and mechanisms for attribution and compensation. Without these safeguards, AI art risks deepening mistrust between creators and technology while consolidating power within a small number of platforms.

The study also challenges the idea that AI must inevitably replace human creativity. Instead, it suggests that alternative models are possible if AI is positioned as a supportive tool rather than a substitute. Such an approach would prioritize augmentation over automation and recognize the social, emotional, and contextual dimensions of artistic practice that machines cannot replicate.

  • FIRST PUBLISHED IN: Devdiscourse