Human oversight and AI literacy key to responsible AI integration in education


CO-EDP, VisionRI | Updated: 26-02-2026 19:06 IST | Created: 26-02-2026 19:06 IST

A new international research report warns that the future of education in the artificial intelligence (AI) era will depend on whether human agency remains at the center of the transformation now underway. Posted on arXiv, the study examines the growing tension between generative AI adoption and the preservation of autonomy in teaching and learning.

The report, titled Protecting and Promoting Human Agency in Education in the Age of Artificial Intelligence, lays out a comprehensive framework for integrating AI into education without eroding the intentionality, judgment, and decision-making power of teachers and learners.

Reframing human agency in the AI classroom

Human agency in education is defined in the report as the capacity to act intentionally, make informed choices, exercise meaningful control, and influence outcomes. The authors argue that this capacity is foundational not only to effective learning but also to the ethical purpose of education itself. Schools and universities are not merely information delivery systems. They are human institutions designed to foster empathy, critical thinking, and intercultural understanding. Any technology that reshapes these environments must therefore be evaluated not only for efficiency but also for its impact on autonomy and cognitive development.

The report identifies four interlocking sources of agency that can either be strengthened or weakened by AI systems: human oversight, human–AI complementarity, AI competencies, and emergent relational design.

Human oversight refers to maintaining meaningful control over AI systems. As generative AI tools become more autonomous, the study stresses the importance of bounding their operational domains, ensuring compatibility between human and machine representations of tasks, and clearly linking AI outputs to human decisions. Oversight is not simply technical supervision. It is moral and pedagogical stewardship.

However, the researchers caution that oversight is not a universal solution. Institutional pressures for efficiency and cost reduction may incentivize full automation, reducing opportunities for human judgment. Moreover, monitoring AI systems often requires extensive data collection, raising concerns about privacy and the commercialization of student data. Oversight must therefore be balanced with safeguards that prevent excessive surveillance and preserve trust.

Human–AI complementarity forms the second pillar of the framework. Rather than positioning AI as a replacement for educators, the study advocates for hybrid adaptivity models in which humans and AI systems co-construct decisions across perception, interpretation, and action. In this model, AI handles routine or data-intensive tasks, while teachers and learners retain authority over complex reasoning, ethical deliberation, and emotional engagement.

The authors note that complementarity is not automatic. Poorly designed systems risk displacing teacher expertise or encouraging students to outsource metacognitive processes. Intelligent tutoring systems, for example, can support learners but may reduce teacher agency if educators are excluded from feedback loops. Delegating lower-level tasks to AI while preserving human control over higher-order decisions is presented as a more sustainable path.

The report also examines the cognitive implications of AI integration. Drawing on dual-process theories of reasoning, the authors explore whether generative AI functions as a new cognitive layer, sometimes described as System 0. While AI can extend human capacity by processing information rapidly, it may also promote uncritical acceptance of machine-generated outputs. Efficiency gains must not come at the expense of reflective thinking.

Building AI competencies to preserve autonomy

The third major source of agency focuses on AI competencies. The study argues that protecting human agency requires equipping learners and educators with the knowledge and dispositions necessary to engage critically with AI systems. Transparency alone is insufficient. Even explainable AI systems can remain opaque to users without adequate literacy.

AI competence extends beyond understanding how algorithms function. It includes knowing when to trust AI outputs, when to question them, and when to override them entirely. Critical thinking, self-regulated learning, creative problem-solving, and strategic decision-making are identified as essential competencies in AI-mediated environments.

The authors warn of cognitive offloading and what some scholars describe as metacognitive laziness. The ease of generative AI may reduce mental effort and compromise depth in scientific inquiry and writing. Students may achieve performance gains without authentic learning. Educators may adopt AI-generated lesson plans without engaging in reflective adaptation. In both cases, agency is diminished when individuals relinquish intentional engagement.

Motivation emerges as a crucial factor. Research cited in the report indicates that agency-supportive AI tools are most effective for educators who are already motivated to improve their practice. This suggests that competency development must be paired with incentives and institutional cultures that value critical engagement rather than passive consumption of AI outputs.

The workshop participants behind the report also debated whether education systems should prioritize developing resilience in ambiguity over demanding fully explainable AI systems. Given the complexity of large language models, perfect transparency may be unattainable. A dual strategy emerges as the likely path forward: improving explainability where possible while simultaneously educating users to navigate inherent opacity.

In this context, agency is framed as active negotiation. Teachers and learners must remain capable of interpreting, adapting, and contesting AI outputs. The report underscores the need for safe, low-stakes environments where stakeholders can practice these skills before AI becomes deeply embedded in high-stakes assessments and professional evaluations.

Navigating ethical dilemmas and systemic risks

The study examines how agency emerges relationally within sociotechnical systems. Agency is not located solely within individuals but distributed across institutions, technologies, and social networks. The authors draw on postdigital and sociocultural theories to argue that AI integration must be co-designed across these layers.

Systemic concerns loom large. The report warns that AI systems could subtly homogenize cognitive and social experiences, narrowing diversity in thinking and interaction. Academic integrity boundaries are increasingly blurred as generative AI produces sophisticated outputs. Institutions face difficult questions about authorship, originality, and acceptable AI use.

Another unresolved issue involves the redistribution of time saved through automation. If AI generates feedback or grades assignments, educators could invest that time in deeper mentoring and relational engagement. Alternatively, institutions might redirect it toward administrative tasks, reducing opportunities for meaningful interaction. Automation does not inherently diminish agency; its impact depends on how human labor is reallocated.

Three core dilemmas frame the policy implications of AI integration. The first concerns normative constraints. Should AI systems be explicitly designed to optimize human agency? If so, whose values define those constraints? Rigid standards risk standardizing education in ways that limit diversity and deepen inequities. Yet the absence of norms leaves agency vulnerable to commercial interests.

The second dilemma addresses transparency. Should stakeholders demand inherently explainable systems, or should education focus on helping people cope with black-box technologies? The report concludes that both approaches are necessary. Transparency must be purpose-driven and tailored to context, while users must develop critical literacy to maintain informed oversight.

The third dilemma explores AI as System 0. Outsourcing memory, pattern recognition, and creative drafting to AI reshapes human cognition. While this can free capacity for higher-order reasoning, it may also increase vulnerability to algorithmic bias and manipulation. Sustaining democratic values and intellectual autonomy requires deliberate cultivation of reflective engagement.

A call for interdisciplinary action

According to the study, protecting human agency in the age of AI demands collaboration across research, policy, and practice. Longitudinal and context-specific studies are needed to identify effective human–AI configurations. Policymakers must craft regulatory frameworks that safeguard autonomy without stifling innovation. Developers and educators must co-design tools that reinforce professional judgment and student creativity.

The report also highlights the importance of distributed agency, extending beyond teachers and students to include parents, peers, and institutions. AI adoption reshapes power dynamics and responsibilities across educational ecosystems. Without careful design, it risks entrenching inequities or narrowing pedagogical diversity.

The study recognizes that AI can enhance agency when used to strengthen feedback systems, personalize instruction, and redistribute time toward meaningful engagement. Automation can increase human agency if it enables individuals to focus on tasks requiring empathy, creativity, and ethical reasoning.

  • FIRST PUBLISHED IN:
  • Devdiscourse