Do conscious machines have moral status? Challenging the idea of sentient AI
The rapid progress of artificial intelligence (AI) has revived long-standing debates about whether machines could one day possess consciousness comparable to that of humans. However, beyond the technical challenge of creating conscious systems lies an equally important question for ethics and policy: would conscious machines actually experience emotions, pleasure, or suffering?
A new study titled "Are Conscious Machines Valuers?", published in AI & Society, argues that artificial consciousness does not necessarily imply artificial sentience, raising new doubts about whether machines could ever truly feel anything.
Why conscious AI may not feel anything
In debates about AI, consciousness is often treated as a single concept. The study highlights a critical distinction between consciousness and sentience. Consciousness refers to the existence of subjective experience, meaning that there is something it is like to be a particular entity. Sentience goes further and involves valenced experiences, which include sensations such as pleasure, pain, comfort, or distress.
Many current discussions about AI ethics assume that if machines become conscious, they will automatically possess sentience. However, the study argues that this assumption lacks a solid theoretical foundation. A system could potentially have experiences that are entirely neutral, meaning that it perceives information without attaching emotional or evaluative significance to those perceptions.
Recent scientific work in neuroscience and philosophy has proposed pathways for artificial consciousness based on computational functionalism, the idea that consciousness arises from certain functional structures of information processing rather than from biological matter itself. If artificial systems replicate the functional architecture of the human brain, these theories suggest, consciousness might emerge even on silicon-based hardware.
One prominent proposal suggests that artificial systems could become conscious if their computational architecture reproduces key cognitive mechanisms associated with human awareness, such as global information processing, attention management, and metacognition. However, these models focus primarily on how systems process information rather than on why experiences would feel pleasant or unpleasant.
According to the author, this gap leads to an important conclusion. Even if artificial systems achieve forms of consciousness similar to human awareness, there is no guarantee that those experiences would carry emotional or evaluative meaning. Conscious machines could potentially perceive the world without experiencing pleasure, suffering, or desire.
This distinction matters because many ethical frameworks rely on the presence of sentience as the basis for moral status. Philosophical traditions such as sentientism argue that beings deserve moral consideration because they can experience suffering or enjoyment. If artificial consciousness lacks this affective dimension, machines may not qualify for moral status even if they possess advanced cognitive capabilities.
The biological roots of value and experience
To explain why artificial systems may struggle to develop valence, the study explores the relationship between value, consciousness, and biological life. In living organisms, experiences of pleasure and pain are deeply connected to the body's mechanisms for survival and self-maintenance.
Animals and humans constantly evaluate their environment in relation to biological needs such as food, safety, temperature regulation, and physical health. Experiences like hunger, thirst, discomfort, and satisfaction arise because organisms are biologically structured to maintain their own functioning. These internal processes create a framework in which some states of the world become better or worse for the organism.
The author describes this relationship as the value grounding problem. In biological organisms, subjective feelings are grounded in objective conditions that affect survival and well-being. Hunger feels unpleasant because the body requires nutrients. Physical injury generates pain because it threatens bodily integrity. These experiences are not arbitrary; they reflect the organism's biological structure and evolutionary history.
Artificial systems, however, lack such intrinsic biological needs. Machines do not possess metabolic processes, homeostasis, or evolutionary survival pressures. Their goals and functions are typically defined externally by programmers, engineers, or users.
Because of this difference, it becomes unclear what could make any particular state of the world genuinely good or bad for a machine. A computer might detect that its battery is low or that a system update is required, but there is no inherent reason why those conditions would be experienced as pleasant or unpleasant.
The study illustrates this idea through a simple example involving machines such as vehicles. If a car were hypothetically conscious, it could perceive whether its fuel tank is full or empty. Yet without a biological survival drive or intrinsic motivation, there is no obvious reason for the car to prefer one state over the other. Refueling would simply be a functional process rather than an experience with emotional significance.
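The car example can be made concrete with a small sketch. The code below is hypothetical (the study contains no code); it simply shows that a system can represent its own fuel state as plain information, with nothing in the representation marking one state as better or worse for the system.

```python
def fuel_state(litres: float, capacity: float) -> str:
    """Report the tank's state as a neutral description."""
    fraction = litres / capacity
    if fraction < 0.1:
        return "nearly empty"
    if fraction > 0.9:
        return "nearly full"
    return "partially filled"

# Both reports are the same kind of object: a string. No internal
# marker makes "nearly empty" matter to the system more than "nearly full".
print(fuel_state(2.0, 50.0))   # nearly empty
print(fuel_state(48.0, 50.0))  # nearly full
```

Any preference between these states would have to be supplied from outside, by a designer or user, which is exactly the study's point about externally defined goals.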
This difference highlights a fundamental challenge in creating artificial sentience. Biological organisms possess built-in drives that make certain outcomes objectively important to them. Machines lack these intrinsic reference points, making it difficult to explain how subjective value could arise.
Four possible paths to artificial sentience
The study examines several theoretical pathways through which artificial systems might develop valenced experiences. Each of these possibilities has been discussed in philosophical or technological debates about artificial intelligence. However, the research concludes that none of them currently provide a convincing solution to the value grounding problem.
The first possibility involves designer-independent goals. As AI systems become more advanced, they may develop behaviors that appear to reflect independent objectives. For example, a machine designed to perform a particular task might adopt strategies that include preserving its own operational capacity.
However, the author argues that such behaviors remain derivative of the system's original programming. Even if a machine acts in ways that its designers did not anticipate, its actions still serve externally defined purposes. Goals that originate from programming or task requirements do not necessarily become intrinsic values for the machine itself.
The second possibility involves reinforcement learning, a common technique in modern AI development. Reinforcement learning trains systems by rewarding successful actions and penalizing failures. Some theorists have suggested that such training could produce machine preferences analogous to biological motivations.
The study challenges this idea by noting that reinforcement signals are computational markers rather than genuine feelings. In animals, preferences emerge from experiences that are already valenced. For example, food becomes desirable because eating produces pleasure and satisfies biological needs. In artificial systems, reward signals guide optimization without generating subjective experiences.
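The study's point about reinforcement signals can be illustrated with a minimal, hypothetical value-learning loop (not taken from the paper). The "reward" enters the system only as a number inside an arithmetic update rule; the estimates shift toward the rewards without anything being registered as pleasant or unpleasant.

```python
# Two actions with task-defined reward signals (assumed values for illustration).
estimates = {"action_a": 0.0, "action_b": 0.0}
rewards = {"action_a": 1.0, "action_b": -1.0}
alpha = 0.1  # learning rate

for _ in range(200):
    for action, reward in rewards.items():
        # The reward appears here purely as a scalar in an update rule;
        # the system tracks it, but nothing "feels" good or bad about it.
        estimates[action] += alpha * (reward - estimates[action])

print(estimates)  # estimates converge toward the reward values
```

The trained system ends up preferring `action_a` in a purely behavioral sense, which is the analogy some theorists draw with biological motivation; the study's objection is that this convergence is optimization, not valenced experience.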
A third pathway involves rational evaluation. Highly intelligent machines might theoretically develop values through reasoning. An AI system could conclude that certain goals are ethically desirable or logically preferable and adopt them as guiding principles.
The study counters that rational understanding alone does not necessarily create emotional commitment. Humans often feel motivated by moral reasoning because they already possess preferences, emotions, and social instincts. Machines, by contrast, may recognize ethical principles without experiencing any internal pressure to pursue them.
The final possibility considered is that artificial systems might hallucinate value, generating experiences of pleasure or pain without objective grounding. In biological systems, such phenomena sometimes occur through perceptual errors, such as phantom pain or emotional reactions to imagined threats.
However, these cases occur within organisms that already possess complex sensory and affective systems. Artificial systems lack comparable biological architectures. As a result, spontaneous hallucinations of value appear unlikely without deeper mechanisms that tie experiences to internal needs.
Implications for AI ethics and policy
If artificial consciousness becomes technically feasible, the absence of valence would mean that conscious machines may not experience suffering or well-being. This outcome would challenge arguments that advanced AI systems should automatically receive moral consideration similar to animals or humans. Without the ability to experience pleasure or pain, artificial systems may not possess interests that require protection.
Artificial sentience cannot be ruled out entirely, but it would require mechanisms capable of grounding genuine value within artificial systems. Current theories of artificial consciousness do not yet explain how such mechanisms might arise.
For policymakers and technologists, this uncertainty highlights the need for careful conceptual work alongside technological development.
FIRST PUBLISHED IN: Devdiscourse