How AI systems depend on human cognition and labour
A new study by Olivia Guest of Radboud University is challenging one of the most widely accepted ideas in artificial intelligence, arguing that so-called "human-centred AI" may be fundamentally misunderstood. The research suggests that AI has always depended on human cognition, labour, and social relationships, often in ways that remain hidden from view.
Published in Behavioral Sciences under the title "What Does 'Human-Centred AI' Mean?", the study reframes AI not as an autonomous technological force, but as a sociotechnical relationship in which human cognitive work is redistributed, obscured, or displaced. The paper argues that every AI system, from simple tools like calculators to advanced systems like large language models, relies on human input, interpretation, and oversight, even when this involvement is concealed.
Redefining AI: From autonomous systems to human relationships
The study dismantles the prevailing narrative that AI systems operate independently of human influence. Instead, Guest proposes a radical redefinition: AI should be understood as any relationship between humans and machines where cognitive labour appears to be offloaded onto technology.
This shift moves the focus away from the capabilities of machines and toward the interaction between humans and technology. According to the research, AI is not defined by intelligence or autonomy, but by how it reorganizes human effort. In this framework, even historical tools such as abacuses or alarm clocks qualify as forms of AI, as they redistribute cognitive tasks like calculation or timekeeping.
The study categorizes these relationships into three types: enhancement, where technology improves human capabilities; replacement, where machines perform tasks without fundamentally altering human skills; and displacement, where technology undermines or erodes human cognitive abilities. This classification allows for a more nuanced understanding of how AI systems impact society, moving beyond simplistic narratives of progress or threat.
The research challenges the idea that modern AI represents a radical break from the past. Instead, it situates current technologies within a long historical continuum of tools that have shaped human cognition. By doing so, it questions the notion that today's AI systems are uniquely transformative, arguing that their effects can only be understood by examining their relationship with human labour.
The hidden workforce behind AI systems
Contemporary AI systems rely heavily on what the study describes as "obfuscated human labour." While these systems are often presented as automated and self-sufficient, they depend on vast networks of human workers for training, maintenance, and real-time operation.
The paper highlights how modern AI models, including chatbots and image generators, are built on datasets created and curated by humans, often under exploitative conditions. In many cases, low-paid workers perform tasks such as data labeling, content moderation, and output refinement, ensuring that AI systems function as intended. This hidden workforce plays a critical role in shaping AI outputs, yet remains largely invisible to users.
The study draws parallels between these practices and historical examples of concealed labour, such as the Mechanical Turk, an 18th-century chess-playing automaton in which a hidden human player created the illusion of a thinking machine. Similarly, today's AI systems can give the impression of independent intelligence while relying on human intervention behind the scenes.
This phenomenon extends beyond training data to the everyday use of AI systems. Users themselves contribute cognitive labour by crafting prompts, interpreting outputs, and integrating AI-generated content into their work. However, this contribution is often minimized or dismissed, reinforcing the perception that AI systems are doing the thinking.
The research argues that this widespread obfuscation of human labour has significant implications for how AI is understood and regulated. By masking the human effort behind AI systems, it becomes easier to attribute capabilities and agency to machines, leading to exaggerated claims about their intelligence and potential.
The risks of cognitive displacement in the AI era
While some technologies enhance human abilities, the study warns that many contemporary AI systems fall into the category of displacement, where they actively undermine human cognitive skills. This is particularly evident in applications such as essay writing, artistic creation, and social interaction, where AI systems are increasingly positioned as substitutes for human effort.
The research suggests that reliance on these systems can lead to "deskilling," as users become less capable of performing tasks independently. For example, using AI to generate written content may reduce a person's ability to develop arguments or communicate effectively. Similarly, image generation tools can diminish the role of human creativity, while chatbots may replace meaningful social interactions with artificial substitutes.
This process is not limited to individual users but has broader societal consequences. The study highlights how technological displacement has historically affected certain groups more than others, citing the example of human "computers", often women, who were replaced by electronic machines in the 20th century. This pattern, described as "Pygmalion displacement," reflects a recurring tendency to devalue human labour while elevating technological systems.
In the context of modern AI, similar dynamics are at play. The study points to the exploitation of workers in global supply chains, as well as the appropriation of user-generated data, as evidence that AI systems can perpetuate existing inequalities. Framing these systems as autonomous obscures the underlying human contributions and the conditions under which they are produced.
The research also raises concerns about the psychological impact of AI systems that mimic human interaction. When users engage with chatbots designed to simulate companionship, they may develop emotional attachments to entities that lack genuine understanding or empathy. This can lead to confusion about the nature of human relationships and potentially harm mental well-being.
Rethinking human-centred AI in practice
The study argues that the priority should be to recognize and address the human labour embedded within AI systems. This means acknowledging the role of human workers in the development and operation of AI, and ensuring that their contributions are visible and fairly compensated. It also requires a shift away from benchmarking AI systems against human performance, an approach the study critiques as flawed and misleading.
Instead, the research advocates for a framework that evaluates AI based on its impact on human cognition and society. This includes assessing whether a system enhances, replaces, or displaces human abilities, and considering the ethical implications of each outcome.
The study further emphasizes the need to move beyond what it describes as "AI hype," which often exaggerates the capabilities of technology while downplaying its limitations. By grounding AI in its sociotechnical context, it becomes possible to develop more realistic expectations and more effective policies.
First published in: Devdiscourse