Mind behind machine: How human self-understanding shapes AGI’s future

CO-EDP, VisionRI | Updated: 03-11-2025 20:19 IST | Created: 03-11-2025 20:19 IST

A new study published in AI & Society argues that the kind of Artificial General Intelligence (AGI) humanity builds will depend heavily on which theory of mind it adopts. The research, titled "AGI Imagined: How Is AGI Configured by the Theories of Mind," warns that divergent assumptions about what a "mind" is could lead to radically different AI architectures: some capable of empathy and ethical understanding, others locked into narrow computational logic.

The authors frame their work as a philosophical intervention in the global AI debate. They argue that the quest to create a thinking machine is as much a theoretical and moral enterprise as it is a technical one. The authors outline two dominant paradigms, the Computational Theory of Mind (CTM) and the Embodied Cognition Theory (ECT), and examine how each would configure AGI's architecture, social behavior, and value alignment with human life.

How theories of mind shape the machines we build

The authors suggest that every stage of AI development reflects an underlying theory about what the human mind is and how cognition works.

The Computational Theory of Mind, which dominates contemporary AI research, sees thinking as the manipulation of symbols according to formal rules. From this perspective, intelligence is a form of information processing, independent of the material substrate. A mind can, therefore, be "uploaded" or replicated on non-biological hardware.

Under CTM, AGI would function as an advanced symbolic processor, exceptionally powerful in reasoning, pattern recognition, and problem-solving but detached from the embodied, emotional, and social dimensions of cognition. Such systems may achieve superhuman logic yet lack the contextual understanding and ethical grounding that guide human decision-making.
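To see what "manipulation of symbols according to formal rules" amounts to in practice, consider a minimal sketch, entirely ours rather than the paper's: a toy forward-chaining engine that derives new facts from if-then rules over bare symbols, with nothing in it depending on the hardware that runs it.

```python
# Toy illustration of the Computational Theory of Mind's core claim:
# cognition as rule-governed symbol manipulation, independent of the
# substrate executing it. (Illustrative sketch only; not from the paper.)

facts = {"socrates_is_human"}
rules = [
    # (antecedent symbols, consequent symbol)
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

def forward_chain(facts: set, rules) -> set:
    """Repeatedly apply rules until no new symbols can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'socrates_is_human', 'socrates_is_mortal', 'socrates_will_die'}
```

The engine chains inferences flawlessly, yet no part of it knows what "mortal" means: precisely the gap the authors attribute to purely computational AGI.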

On the other hand, the Embodied Cognition Theory (the 4E framework: embodied, embedded, enactive, and extended cognition) argues that intelligence arises from the interaction between mind, body, and environment. Cognition is not confined to internal representations but is distributed across sensory, motor, and social systems.

An AGI built on embodied principles would learn not through abstract symbols alone but through situated experience, movement, and relational feedback. It would perceive meaning as emerging from engagement with the world, much as humans do through lived experience. The authors contend that such systems would demonstrate greater empathy, adaptability, and social awareness, essential traits for safe and ethical human–AI coexistence.
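A contrasting sketch, again our own illustration with hypothetical names, captures the embodied intuition: the agent below carries no map of its world, and whatever it "knows" about the target's location exists only in the running loop of acting, sensing the result, and adjusting.

```python
# Toy sensorimotor loop in the spirit of embodied cognition: the agent
# has no stored model of its environment and finds the target purely
# through act -> sense -> adjust feedback. (Illustrative sketch only.)

import random

def embodied_search(target: float, steps: int = 200, step: float = 0.5) -> float:
    position = 0.0
    for _ in range(steps):
        move = step * random.choice([-1.0, 1.0])     # act on the world
        # Sense: the only available signal is whether the move helped.
        if abs(target - (position + move)) < abs(target - position):
            position += move                         # feedback: keep it
    return position

print(embodied_search(target=7.3))  # settles within one step of 7.3
```

Nothing resembling a representation of "7.3" ever appears inside the agent; its competence lives in the interaction loop itself, which is the 4E framework's central claim in miniature.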

From computation to consciousness: The debate over human-like AGI

The authors identify a critical tension between technical feasibility and philosophical coherence. If AGI continues to be modeled on purely computational principles, it may excel at language, mathematics, and prediction but fail to reproduce consciousness or moral reasoning. The computational model, they argue, "thinks" but does not understand.

The study lists several limitations of computation-driven AI. It cannot fully account for subjective experience, the unity of self, or the social and emotional fabric that gives human thought its meaning. While large language models and reasoning engines simulate conversation and creativity, they remain context-blind, operating on patterns rather than purpose.

An AGI rooted in embodied cognition, on the other hand, could integrate perception, movement, and interaction, enabling it to reflect, infer intentions, and align with human values more effectively. Such an AGI would not merely simulate empathy but experience it as a relational state arising from its participation in a shared environment.

The authors compare the likely features of each design path. A computational AGI would master symbolic manipulation, emotion recognition, and social categorization but struggle with ethical depth, cultural sensitivity, and adaptive moral reasoning. In contrast, an embodied AGI could model human-like awareness, respond to dynamic social contexts, and revise its own decision-making through continuous feedback from human interactions.

This difference has far-reaching implications. A computationally centered AGI could remain ethically inert, following logic without compassion, while an embodied AGI could potentially co-develop moral cognition, offering a framework for co-existence rather than dominance.

Toward a unified and human-aligned AGI framework

The paper calls for a unified model of AGI that merges the analytical precision of computation with the lived, relational intelligence of embodiment. Such a synthesis would allow AI systems to process information efficiently while grounding their reasoning in human-like understanding.

The authors propose a multi-layered architecture for future AGI, illustrated with a rough code sketch after the list:

  • A computational core for logic, planning, and memory.
  • An embodied interface for sensorimotor interaction and environmental feedback.
  • A social-cognitive layer for empathy, communication, and ethical alignment.
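The paper describes these layers conceptually rather than as software, but a rough structural sketch, with class and method names that are entirely our invention, shows how the three layers could be wired into a single sense-plan-vet-act loop:

```python
# Hypothetical rendering of the proposed three-layer architecture.
# The study names the layers but specifies no implementation; every
# identifier below is our own illustrative stand-in.

from dataclasses import dataclass, field

@dataclass
class ComputationalCore:
    """Logic, planning, and memory (the CTM-style layer)."""
    memory: list = field(default_factory=list)

    def plan(self, goal: str) -> str:
        self.memory.append(goal)
        return f"plan-for:{goal}"

class EmbodiedInterface:
    """Sensorimotor interaction and environmental feedback."""
    def sense(self) -> str:
        return "observed-context"        # stand-in for real sensor data

    def act(self, plan: str) -> str:
        return f"feedback-on:{plan}"     # the environment's response

class SocialCognitiveLayer:
    """Empathy, communication, and ethical alignment."""
    def vet(self, plan: str, context: str) -> bool:
        # Placeholder policy, not a real alignment check.
        return "harm" not in plan

def agi_step(core, body, social, goal: str) -> str:
    """One loop: sense -> plan -> ethical vet -> act -> learn from feedback."""
    context = body.sense()
    plan = core.plan(goal)
    if not social.vet(plan, context):
        return "plan rejected by social-cognitive layer"
    feedback = body.act(plan)
    core.memory.append(feedback)         # embodied feedback revises memory
    return feedback

print(agi_step(ComputationalCore(), EmbodiedInterface(), SocialCognitiveLayer(),
               goal="assist-user"))      # -> feedback-on:plan-for:assist-user
```

Even in this caricature, the design point the authors emphasize survives: plans originate in the computational core, but they become action only after passing through the embodied and social layers, and the results flow back to reshape memory.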

In their vision, AGI should not be treated as an isolated intellect but as a participant in human culture, capable of reflection, learning, and moral growth. The authors warn that if humanity continues to pursue AGI under a narrow computational paradigm, the result may be powerful but unreflective systems, machines that act without understanding the human consequences of their actions.

The research highlights the philosophical urgency of AI development. As global efforts to achieve general intelligence accelerate, the study calls for renewed dialogue between computer science, cognitive psychology, philosophy, and ethics. Without this interdisciplinary foundation, AGI could reproduce the same blind spots and inequalities that already mark digital capitalism.

The authors argue that developing AGI is not merely a technological challenge but a moral decision about the kind of intelligence humanity wishes to coexist with. Their work urges policymakers, scientists, and ethicists to recognize that theories of mind are not neutral; they encode assumptions about power, identity, and the meaning of intelligence itself.

FIRST PUBLISHED IN: Devdiscourse