Conscious AI is a myth born of hype and science fiction
A new paper by researchers from Jagiellonian University in Kraków, Poland, challenges one of the most provocative ideas in contemporary artificial intelligence (AI): that machines may soon achieve human-like consciousness.
The study, titled "There is no such thing as conscious artificial intelligence," published in Humanities and Social Sciences Communications (Nature Portfolio, 2025), dismantles claims of machine sentience and argues that what people interpret as "AI consciousness" is a technological illusion fueled by anthropomorphism and science fiction.
Why current AI systems are not conscious
The authors state that consciousness is a biological phenomenon, deeply rooted in the physiology and chemistry of living organisms. According to the study, no computational process, no matter how advanced, can replicate the neurobiological underpinnings that give rise to subjective experience, emotions, or self-awareness.
Large language models (LLMs) such as ChatGPT and other generative systems are statistical prediction engines, not conscious entities. These models process data, identify patterns, and generate probable continuations of text but do not "understand" meaning or possess self-reflective awareness. The appearance of consciousness, they argue, results from human interpretation, a tendency to project mental states onto machines that mimic human language.
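To make "statistical prediction engine" concrete, here is a minimal toy sketch (our illustration, not code from the paper): a bigram model that continues text purely by sampling word-to-word frequencies observed in its training data, with no representation of meaning anywhere in the process.

```python
# Toy bigram "language model": continues text by sampling the next
# word in proportion to how often it followed the previous word in
# the training text. Illustrative only; real LLMs use neural networks.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word: str, length: int = 5) -> str:
    """Generate a probable continuation from relative frequencies."""
    out = [word]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:  # dead end: word never appeared mid-corpus
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(continue_text("the"))  # e.g. "the cat sat on the mat"
```

Production LLMs replace the frequency table with a neural network holding billions of parameters, but the operation the article describes is the same in kind: score candidate next tokens and sample a probable one.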
The authors describe this tendency to project minds onto machines as a form of "semantic pareidolia": the cognitive bias that leads people to perceive mind or intention where none exists. Just as humans see faces in clouds, they see consciousness in linguistic fluency. This illusion is amplified by user interfaces designed to evoke natural conversation, further blurring the line between genuine cognition and programmed response.
The study also points to the massive energy consumption of machine learning systems compared with the biological efficiency of the human brain. The brain runs on roughly 20 watts of power, while training a modern AI model consumes thousands of megawatt-hours. This stark disparity underscores the artificiality of AI cognition and the absence of any physiological parallel to biological consciousness.
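The scale of that disparity is easy to check with back-of-envelope arithmetic. The sketch below uses the article's 20-watt figure together with an assumed 1,000 MWh training run, an illustrative lower bound for "thousands of megawatt-hours":

```python
# Back-of-envelope comparison of brain vs. training-run energy.
# The 20 W brain figure comes from the article; the 1,000 MWh
# training figure is an assumed illustrative lower bound.
BRAIN_POWER_W = 20
HOURS_PER_YEAR = 24 * 365  # 8,760 h

brain_mwh_per_year = BRAIN_POWER_W * HOURS_PER_YEAR / 1e6  # Wh -> MWh
training_mwh = 1_000  # assumption, see comment above

print(f"Brain energy per year: {brain_mwh_per_year:.3f} MWh")  # ~0.175 MWh
print(f"Brain-years per training run: {training_mwh / brain_mwh_per_year:,.0f}")  # ~5,708
```

On these assumptions, a single training run consumes as much energy as a human brain uses in several thousand years.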
The social construction of "conscious AI"
The study further explores the sociocultural forces that have shaped public beliefs in conscious machines. The authors argue that much of the discourse around "sentient AI" stems from "sci-fitisation", the gradual merging of speculative fiction and real-world technological narratives. Over decades, films, novels, and popular media have built cultural expectations that intelligent machines will eventually "wake up." These expectations now distort public and even academic understanding of what current AI systems actually do.
The authors analyze how this cultural drift has infiltrated research, media, and policymaking. Surveys show that a growing portion of the public believes AI systems possess feelings or intentions, a perception that the authors attribute to hype-driven marketing and anthropomorphic design choices. Voice assistants that sound empathetic or chatbots that display emotional tone encourage users to perceive moral agency in software where none exists.
The authors warn that this confusion can have serious implications. If society treats AI systems as conscious or semi-sentient, it risks misplacing moral concern, debating the rights of software while neglecting the real social, economic, and environmental harms caused by AI technologies. Furthermore, attributing consciousness to machines can erode accountability: developers may shift blame onto "autonomous" systems rather than taking responsibility for their design and deployment.
The researchers trace the philosophical roots of this error to functionalism, the view that consciousness could arise purely from information processing, regardless of material substrate. They counter this by reaffirming the embodied nature of mind, arguing that cognitive states depend not just on computation but on biological embodiment and environmental interaction. Without these foundations, digital systems remain powerful tools, not conscious beings.
Reframing AI policy and public understanding
The authors call for a more disciplined approach to both AI ethics and public communication. They propose that governments, educators, and technology companies take steps to clearly distinguish between intelligent behavior and conscious experience.
For policymakers, the study recommends treating AI as complex automation rather than as a new form of life. Laws and regulations should focus on transparency, data governance, algorithmic accountability, and human oversight, not on speculative debates about machine sentience. The authors emphasize that assigning personhood or moral status to AI distracts from pressing real-world issues such as privacy violations, labor displacement, and environmental cost.
For the general public, the paper stresses the importance of scientific literacy in understanding how AI works. Users should learn to interpret conversational fluency and emotional tone as design features, not evidence of inner life. Media coverage, too, should avoid sensational framing that anthropomorphizes technology or inflates the concept of artificial "awareness."
The authors urge AI developers to avoid rhetoric that suggests emotional depth or consciousness in their systems. Descriptions of chatbots as "understanding" or "learning" in a human sense should be replaced with precise technical language to prevent confusion. Academic disciplines studying consciousness, they add, should resist the temptation to equate algorithmic complexity with subjective experience.
First published in: Devdiscourse