Degrees without thinking? AI is decoupling knowledge from performance
A study by Matthew Montebello of the University of Malta argues that generative AI is fundamentally breaking the long-standing link between assessment performance and genuine learning in higher education, forcing educators to rethink the foundations of teaching and evaluation.
The study, titled "From Cognitive Necessity to Cognitive Choice: Higher Education Assessment and Learning in the Age of Generative AI," published in AI Educ., analyses how generative AI tools are redefining the relationship between cognition, assessment, and learning processes in universities worldwide. It introduces a central concept, the "cognitive engagement gap", to describe a growing disconnect between what students produce and what they actually understand.
Generative AI breaks the link between assessment and learning
Generative AI represents a structural shift in education, not just a technological enhancement. Historically, assessment in higher education served a dual role: it measured learning outcomes and simultaneously enforced cognitive engagement. Students had to think, analyze, and synthesize information in order to complete assignments, even when they adopted surface-level strategies.
This implicit requirement for cognitive effort ensured that assessment performance generally reflected at least some degree of learning. Essays, exams, and projects were designed in ways that made it difficult to produce acceptable outputs without engaging in meaningful intellectual work.
Generative AI disrupts this model by enabling students to produce complete academic outputs without necessarily performing the underlying cognitive processes. Large language models can generate essays, arguments, and structured responses that meet academic standards of coherence and fluency, effectively decoupling performance from cognition.
The study argues that this change marks a transition from what it defines as cognitive necessity to cognitive choice. In pre-AI environments, cognitive engagement was unavoidable for successful task completion. In AI-mediated environments, it becomes optional, depending on whether the learner chooses to engage deeply or rely on automated outputs.
This shift introduces a fundamental problem for higher education: assessment can no longer be assumed to serve as a reliable proxy for learning. The traditional assumption that completing an assignment demonstrates understanding is no longer valid when the cognitive work can be outsourced to AI systems.
The issue is not simply whether students misuse AI, but that the structure of learning itself has changed. Even well-intentioned students may rely on AI for efficiency, reducing opportunities for cognitive engagement.
The 'cognitive engagement gap' exposes hidden weaknesses in education systems
The cognitive engagement gap is defined as the growing separation between assessment performance and the cognitive processes associated with learning. This gap represents a structural condition in which students can meet academic requirements without necessarily engaging in comprehension, analysis, or reflection.
The study argues that this gap is not entirely new but has been amplified by generative AI. Even before AI, researchers had identified limitations in assessment systems, including tendencies toward surface learning and performance-oriented behavior. Generative AI accelerates these issues by removing the need for productive struggle, a key component of learning.
One of the most significant consequences of the cognitive engagement gap is the emergence of what the study describes as simulated understanding. AI-generated outputs can appear highly sophisticated, creating the illusion that a student has mastered the material. In reality, the student may have limited or no understanding of the concepts presented.
This phenomenon has serious consequences for education, particularly in disciplines where knowledge builds cumulatively or where professional competence depends on deep understanding. Students may progress through courses while lacking the foundational knowledge required for advanced learning or real-world application.
The study also highlights the limitations of current institutional responses, particularly the reliance on AI detection tools and enforcement measures. These approaches are described as both technically unreliable and pedagogically insufficient. Detection systems can produce false positives and are easily circumvented, raising concerns about fairness and trust.
More importantly, enforcement does not address the underlying issue. Even if AI use could be perfectly detected, the cognitive engagement gap would remain because it is rooted in how assessment tasks are designed. The problem is not simply that students use AI, but that assessment systems no longer require them to demonstrate cognitive processes.
The research further identifies equity risks associated with this shift. Students with strong self-regulation skills and prior academic preparation are more likely to use AI as a tool for learning, critically engaging with its outputs. In contrast, less prepared students may rely on AI as a substitute for engagement, widening existing inequalities in educational outcomes.
New assessment models must prioritize cognition over performance
In response to these challenges, the study calls for a fundamental redesign of assessment practices, shifting the focus from evaluating final outputs to evidencing cognitive processes. This requires moving away from product-oriented assessment models toward approaches that make thinking visible.
The research proposes a framework known as Cognitive Engagement-Centred Assessment, which emphasizes the need to explicitly design tasks that require and reveal cognitive effort. Rather than assuming that learning occurs through task completion, educators must ensure that assessment structures actively elicit reasoning, reflection, and metacognitive awareness.
This approach involves integrating process-based elements into assessment, such as iterative drafts, reflective explanations, and opportunities for students to articulate their reasoning. By focusing on how students arrive at answers rather than just the answers themselves, these methods aim to restore the link between assessment and learning.
The study also highlights the importance of metacognition in AI-mediated environments. Students must develop the ability to evaluate AI-generated outputs, recognize their limitations, and make informed decisions about how to use them. This requires explicit teaching of self-regulation skills, which cannot be assumed to develop automatically.
Another key recommendation is the diversification of assessment formats. Traditional written assignments are particularly vulnerable to AI substitution, while formats such as oral assessments, real-time problem-solving tasks, and interactive activities provide more direct evidence of understanding.
Importantly, assessment design should be robust regardless of whether AI is used. This principle, described as AI-agnostic robustness, shifts the focus away from controlling technology use and toward ensuring that learning outcomes are achieved under any conditions.
The research also calls for a shift from surveillance-based approaches to design-based solutions. Institutions should create learning environments where meaningful engagement is necessary for success. This requires investment in educator training, as well as institutional support for innovation in teaching practices.
FIRST PUBLISHED IN: Devdiscourse