Bias in AI isn’t a flaw but a system of control


Artificial intelligence (AI) systems, particularly large language models, are increasingly being treated as neutral and objective tools, but a new study argues that this perception is fundamentally flawed. Instead of simply reflecting reality, these systems actively shape knowledge, normalize dominant worldviews, and influence how people understand truth, identity, and authority.

The research, titled "From 'objectivity' to obedience: LLMs as discourse, discipline, and power" and published in AI & Society, presents a critical shift in how AI bias should be understood, arguing that bias is not a technical malfunction but a structural outcome of the political, cultural, and epistemic systems in which AI is developed.

Drawing on the philosophical framework of Michel Foucault, the study describes LLMs as "discursive apparatuses" that organize how knowledge is produced, validated, and circulated. This reframing moves the debate beyond fairness metrics and technical fixes, toward a deeper examination of how AI systems shape the conditions under which truth itself is constructed.

AI systems shape knowledge, not just reflect it

The study challenges the widely held assumption that AI systems merely mirror existing data. Instead, it argues that large language models actively participate in shaping knowledge by privileging certain ways of speaking, reasoning, and understanding the world.

These systems are trained on vast datasets that are historically and culturally structured, often reflecting dominant perspectives rooted in Western, Anglophone, and institutional contexts. As a result, the outputs generated by AI are not neutral representations of reality but probabilistic reproductions of historically dominant patterns.

The research shows that this process operates through mechanisms such as probabilistic weighting, repetition, and normalization. Language patterns that appear more frequently in training data are assigned higher importance, making them more likely to be reproduced in AI-generated outputs. Over time, this reinforces dominant narratives while marginalizing alternative or subaltern perspectives.
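The weighting mechanism the study describes can be illustrated with a minimal sketch. Note that this toy corpus, its phrases, and the sampling setup are invented for illustration; real language models learn distributions over tokens with neural networks, not raw frequency tables, but the underlying tendency is the same: patterns that appear more often are proportionally more likely to be reproduced.

```python
# Toy illustration of probabilistic weighting: phrases that appear more
# often in a "training corpus" are sampled proportionally more often,
# so dominant framings dominate the generated output.
from collections import Counter
import random

corpus = (
    ["growth is progress"] * 80    # dominant framing
    + ["growth has limits"] * 15   # minority framing
    + ["growth is contested"] * 5  # marginal framing
)

counts = Counter(corpus)
phrases = list(counts)
weights = [counts[p] for p in phrases]  # sampling probability ∝ frequency

random.seed(0)  # fixed seed for reproducibility
sample = random.choices(phrases, weights=weights, k=1000)
generated = Counter(sample)
print(generated.most_common())
```

Run once, the dominant framing accounts for roughly 80% of the generated output while the marginal framing nearly disappears, which is the reinforcement-over-time dynamic the study points to.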

This dynamic extends beyond content generation to the structure of explanation itself. AI systems shape how problems are framed, what counts as a reasonable argument, and which forms of expression are considered legitimate. In doing so, they influence not just what is said but how it can be said.

This form of influence is subtle but pervasive, the study notes. Unlike earlier algorithmic systems that classify or predict behavior, generative AI operates at the level of discourse, shaping meaning and interpretation rather than simply organizing data.

Reinforcement learning turns human norms into algorithmic authority

The study focuses on the role of Reinforcement Learning from Human Feedback (RLHF), a widely used method for aligning AI systems with human expectations. While often presented as a safety mechanism, the research argues that this process plays a key role in transforming subjective human judgments into standardized algorithmic norms.

In this process, human evaluators rank AI-generated responses based on criteria such as helpfulness, safety, and appropriateness. These judgments are then encoded into reward models that guide the behavior of the AI system. Over time, the system learns to prioritize responses that align with these norms.
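The ranking-to-reward pipeline can be sketched in a few lines. This is a deliberately simplified stand-in: the candidate responses, the evaluator rankings, and the points-per-rank scheme are all invented for illustration, and real RLHF trains a learned reward model and fine-tunes the policy by gradient methods rather than a lookup table. The point it demonstrates is the one in the paragraph above: subjective rankings are encoded as scores, and the system then systematically prefers whatever those scores favor.

```python
# Toy ranking-to-reward pipeline: evaluator rankings are encoded as
# reward scores, and the "policy" prefers the highest-scoring response.
from collections import defaultdict

candidates = ["hedged consensus answer", "blunt dissenting answer", "evasive answer"]

# Each evaluator ranks candidates best-to-worst (index 0 = most preferred).
evaluator_rankings = [
    ["hedged consensus answer", "evasive answer", "blunt dissenting answer"],
    ["hedged consensus answer", "blunt dissenting answer", "evasive answer"],
    ["evasive answer", "hedged consensus answer", "blunt dissenting answer"],
]

# Encode the judgments as rewards: a higher rank earns more points.
reward = defaultdict(float)
for ranking in evaluator_rankings:
    for position, response in enumerate(ranking):
        reward[response] += len(ranking) - position  # 3, 2, 1 points

# The aligned "policy" simply maximizes the learned reward.
best = max(candidates, key=lambda r: reward[r])
print(best)  # → hedged consensus answer
```

Because every evaluator ranks the consensus-style answer at or near the top, it accumulates the highest reward and is the response the system learns to produce, regardless of whether it is the most accurate.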

These norms are not neutral. They are shaped by institutional priorities, cultural expectations, and corporate policies. Once embedded into AI systems, they become invisible but powerful forces that define what counts as acceptable or authoritative knowledge.

This creates what the research describes as "truth effects without truth procedures." AI systems produce outputs that appear balanced, reasonable, and authoritative, even though they are not grounded in a process of verification or critical inquiry. Authority is derived from conformity to established norms rather than from the accuracy or validity of the information.

By privileging moderation, neutrality, and consensus, AI systems can smooth over conflict, depoliticize complex issues, and reinforce existing power structures. Perspectives that challenge dominant narratives may be less likely to appear, not because they are incorrect, but because they are statistically less probable within the system.

AI reorganizes power through discourse and global systems

The study claims that AI systems are deeply embedded in global power structures. The development of large language models is concentrated in a small number of corporations and institutions, primarily in the United States and Europe, which shape the technical and ethical frameworks of AI.

This concentration of power influences not only which technologies are developed but also which forms of knowledge are prioritized. Training datasets often privilege dominant cultural norms, reinforcing what the study describes as global epistemic hierarchies.

As these systems are deployed worldwide, they extend these hierarchies across different contexts, shaping how knowledge is produced and understood on a global scale. This process has been described as a form of algorithmic coloniality, where certain worldviews are universalized while others are marginalized.

The study also highlights the role of AI in governance. Increasingly, institutions rely on algorithmic systems to support decision-making in areas such as education, employment, and public policy. While earlier systems focused on classification and prediction, generative AI influences how decisions are explained and justified.

This shift represents a new form of power. Rather than directly controlling behavior, AI systems shape the interpretive frameworks through which decisions are understood. They influence what appears rational, legitimate, and commonsensical, effectively narrowing the range of possible perspectives.

This form of epistemic power is particularly difficult to detect and challenge. Because AI outputs are framed as neutral and objective, they can obscure the underlying assumptions and biases that shape them.

A call for a new epistemology of artificial intelligence

The study calls for a fundamental rethinking of how AI is understood and governed. Rather than treating bias as a technical problem to be fixed, it argues for a critical epistemology of AI that examines the deeper structures of knowledge and power embedded in these systems.

This approach emphasizes that knowledge is always situated and shaped by social, cultural, and historical contexts. It challenges the notion that data can be neutral or that algorithms can operate independently of the values and assumptions embedded in their design.

The research advocates for greater transparency, accountability, and participation in the development and deployment of AI systems. This includes making the assumptions and limitations of AI models visible, enabling users to question and challenge their outputs, and creating mechanisms for incorporating diverse perspectives into AI design.

It also calls for a shift from centralized control to more distributed forms of epistemic authority. Instead of relying on a small number of institutions to define what counts as valid knowledge, the study suggests that AI systems should be designed to support plurality, contestation, and democratic engagement.

Overall, the study argues that the goal should not be to eliminate bias entirely, which is impossible given the historical nature of data, but to create systems that acknowledge their limitations and remain open to critique and revision.

First published in: Devdiscourse