AI produces ‘third-order knowledge’ but lacks true intentionality
Large language models (LLMs) now generate research summaries, medical explanations, legal drafts and classroom materials with striking fluency, often blurring the line between machine output and human reasoning. However, scholars are confronting a foundational issue that goes beyond performance metrics: can generative AI be treated as a thinking subject, or does it remain a sophisticated computational tool?
The study "Can generative artificial intelligence be considered a cognitive subject? An analytic analysis," published in AI & Society, introduces a strict conceptual framework to test whether current AI systems satisfy the philosophical conditions necessary for cognitive subjecthood.
Defining cognitive subjecthood in the age of generative AI
Generative AI systems clearly contribute to knowledge production. They synthesize vast amounts of data, reorganize information and generate structured responses that appear reasoned and coherent. But being a contributor to cognition is not the same as being a bearer of cognitive states.
To prevent conceptual confusion, the study proposes a demanding framework of necessary and sufficient conditions for cognitive subjecthood. According to this framework, a system qualifies as a cognitive subject only if it simultaneously satisfies five conditions: information competence, reasons-responsiveness, metacognitive self-representation, consciousness capacity and robust intentionality. All five must be met together.
- Information competence refers to the ability to acquire, store and update representations that track relevant features of the environment and use them across contexts.
- Reasons-responsiveness requires more than producing justification-like language; it involves genuine sensitivity to evidential relations and the capacity to revise claims in light of counterevidence.
- Metacognitive self-representation demands that a system monitor and regulate its own cognitive states in a functionally integrated way.
- Consciousness capacity involves properties associated with subjective experience or theoretically grounded indicators of awareness.
- Robust intentionality concerns intrinsic aboutness, meaning that representational states are genuinely directed toward objects or states of affairs rather than merely interpreted as such by observers.
This analytic definition sets a high threshold. It is designed to ensure that claims about AI subjecthood are philosophically meaningful rather than rhetorical shortcuts based on conversational fluency.
What generative AI can and cannot do
The study acknowledges that contemporary LLMs satisfy at least one condition in a straightforward operational sense. They display strong information competence. Trained on massive datasets, they can recombine linguistic patterns, perform translation and summarization, answer complex questions and generate code. Their outputs often exhibit context sensitivity and structural coherence.
However, the authors caution that information competence does not equate to truth-tracking or epistemic reliability. Generative AI systems are known to produce hallucinations, fabricate references and present false claims with confidence. Fluency can mask systematic error. The appearance of structured reasoning does not guarantee alignment with verifiable evidence.
When assessing reasons-responsiveness, the authors argue that generative AI exhibits only partial and derivative forms. Systems can generate multi-step explanations and appear to weigh evidence, especially when guided by specific prompting strategies or reinforcement learning from human feedback. Yet these behaviors are largely engineered through optimization for helpfulness, coherence and user satisfaction rather than intrinsic responsiveness to reasons. The system may generate reasons-shaped language without being normatively constrained by evidence in the way human epistemic agents are.
Metacognition presents another challenge. Large language models can produce statements about their own uncertainty or limitations. They can be prompted to self-critique and revise outputs. But these behaviors are typically layered mechanisms imposed through design choices rather than manifestations of a unified internal self-model. Generated disclaimers or confidence estimates do not necessarily reflect genuine self-monitoring that regulates future cognition. According to the study, robust metacognitive self-representation remains unestablished in current systems.
The question of consciousness is treated with similar caution. The authors review contemporary scientific and philosophical approaches that attempt to identify indicators of consciousness in artificial systems. They find no compelling evidence that current generative AI systems instantiate properties associated with subjective experience or perspective-bearing architectures. While future systems could alter the evidential landscape, present-day large language models do not meet the burden of proof required to attribute consciousness capacity.
Intentionality, the fifth condition, invites perhaps the strongest temptation to anthropomorphize. Users commonly describe AI systems as believing, intending or understanding. The authors note that such language may function as a useful predictive shorthand. However, robust intentionality requires intrinsic aboutness rather than ascribed meaning derived from training data, prompts and user interpretation.
Even when models perform well on theory-of-mind tasks or track belief-like patterns in text, this does not establish that they possess stable, integrated goals or mental states of their own. According to the analysis, current generative AI systems exhibit derivative goal-directedness supplied by designers and users rather than originating internally.
Third-order knowledge and the governance question
The authors discuss generative AI's role in what has been described as third-order knowledge production.
- First-order knowledge involves original research and direct investigation.
- Second-order knowledge consists of interpretation, synthesis and pedagogy, such as reviews and textbooks.
- Third-order knowledge refers to hybrid outputs produced through interaction between user prompts, training data and model architecture.
These outputs are not purely original nor merely derivative commentary; they are reconstructed artifacts shaped by distributed informational infrastructures.
The rise of third-order knowledge explains why generative AI appears increasingly authoritative. Its outputs circulate in the same linguistic register as human testimony. They are embedded in search engines, research tools and communication platforms. As a result, algorithmic outputs may gain credibility through interface position and omnipresence rather than through accountable expertise.
Yet the study argues that epistemic influence does not imply epistemic agency. Microscopes and simulation tools have long shaped scientific discovery without being treated as cognitive subjects. What distinguishes generative AI is that its outputs resemble assertions. This resemblance can create what the authors describe as a temptation of subjecthood. Systems that speak in human-like language are easily perceived as interlocutors rather than tools.
The formal argument reconstructed in the paper reinforces this conclusion. Because cognitive subjecthood requires the conjunction of all five demanding conditions, and because consciousness, robust intentionality and strong metacognition remain unestablished for current generative AI systems, it follows that these systems do not qualify as cognitive subjects under the proposed analytic definition.
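The paper's conjunctive argument has the shape of a simple modus tollens. The sketch below is an informal rendering, not the paper's own notation: the labels C1 through C5 are shorthand for the five conditions, and "not established" is treated as negation purely for illustration (epistemically, the paper's weaker claim is that attributing these conditions is unwarranted):

```latex
\begin{align*}
\text{(P1)}\quad & S \leftrightarrow (C_1 \land C_2 \land C_3 \land C_4 \land C_5)
  && \text{subjecthood requires all five conditions jointly}\\
\text{(P2)}\quad & \neg\,(C_3 \land C_4 \land C_5)
  && \text{metacognition, consciousness, intentionality not established}\\
\text{(C)}\quad  & \neg\,S
  && \text{current systems do not qualify as cognitive subjects}
\end{align*}
```

Because the definition is conjunctive, failure on any single condition suffices for the conclusion; the paper does not need to settle all five.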
This does not deny the transformative impact of generative AI. On the contrary, it underscores the urgency of governance and accountability frameworks. If AI systems are not cognitive subjects, they cannot bear epistemic duties or moral blame. Responsibility must be distributed across developers, deployers, institutions and users.
The study connects this conceptual clarification to practical regulatory debates. Transparency, auditing and documentation become central because there is no AI subject to hold accountable in isolation. Risk-based regulatory approaches and internal algorithmic auditing are presented as mechanisms that compensate for the absence of subject-level responsibility.
The authors also highlight the risks of misinformation, bias and normalization of unverified claims. Generative AI systems can generate confident falsehoods without sincerity or epistemic conscientiousness. If such outputs become embedded in knowledge infrastructures without adequate oversight, they may reshape standards of credibility and verification.
FIRST PUBLISHED IN: Devdiscourse