Inequality and bias threaten education goals as AI policies remain underdeveloped


A new systematic review published in Sustainability highlights how universities, governments, and international organizations are grappling with the disruptive force of AI, revealing a fragmented and reactive policy landscape that risks widening inequalities and undermining academic integrity.

Authored by Iryna Kushnir of Nottingham Trent University and Marcellus Forh Mbah of the University of Manchester, the study draws attention to the urgent need for coherent governance frameworks as AI becomes embedded across teaching, research, and administration. The findings suggest that while AI adoption is accelerating globally, policy responses remain uneven, underdeveloped, and often improvised.

The study, titled "Policy Controversies Over AI Applications in Higher Education Within the Framework of Sustainable Development Goal 4," highlights how limited the evidence base remains despite the scale of transformation underway.

AI adoption outpaces governance, exposing risks to integrity and equity

The review finds that AI integration in higher education has expanded across multiple domains, including personalized learning, predictive analytics, automated assessment, and administrative efficiency. However, this rapid adoption has introduced a cluster of policy challenges that institutions are struggling to address in a coordinated manner.

One of the key concerns is academic integrity. The study documents widespread student use of generative AI tools, with some institutions reporting that up to 80 percent of students have used AI for learning purposes. While such tools can enhance productivity, they are also being used to bypass learning processes, raising concerns about plagiarism, authorship, and the erosion of critical thinking skills. Weak assessment design has further compounded the issue, leaving universities vulnerable to misuse.

Equity concerns present another major fault line. The research highlights stark disparities in AI access and readiness between institutions in the Global North and Global South, as well as within countries across socioeconomic lines. These inequalities risk reinforcing existing educational divides, directly conflicting with the United Nations' Sustainable Development Goal 4, which emphasizes inclusive and equitable quality education. The study also flags algorithmic bias as a systemic risk, particularly in high-stakes processes such as admissions and grading, where biased AI systems could have long-term consequences for students.

Accuracy and reliability of AI-generated content emerge as additional concerns. The increasing reliance on AI for generating summaries, feedback, and even academic references raises questions about the trustworthiness of outputs, especially when systems produce plausible but incorrect information. This issue is closely tied to data privacy and security risks, as students and educators may unknowingly input sensitive or confidential data into AI platforms without adequate safeguards.

These risks are not theoretical but already shaping institutional responses. Universities have oscillated between restrictive measures, including outright bans on tools like ChatGPT, and more adaptive approaches that acknowledge the inevitability of AI integration. However, bans have proven largely ineffective due to the proliferation of alternative tools and the difficulty of enforcement, pushing institutions toward more nuanced strategies.

Fragmented policy responses reveal reactive and layered governance

AI policy development in higher education is characterized by what researchers describe as "layering." Rather than introducing comprehensive reforms, institutions are incrementally adding new rules, guidelines, and practices onto existing frameworks. This results in a patchwork of policies that vary widely across institutions and regions.

The review identifies several dominant policy approaches. Early responses were largely reactive, driven by concerns over misuse in student assignments. By mid-2023, fewer than one-third of the world's top 500 universities had formal AI policies, and many of those relied on restrictive measures such as bans. Over time, however, there has been a shift toward acceptance and adaptation, with institutions adopting more open but cautious approaches.

Policy development is increasingly shaped by what the study terms a "heterarchical" system, involving multiple actors across different levels. International organizations such as UNESCO, OECD, and the European Commission have issued broad ethical guidelines, while national governments have introduced policy frameworks tailored to their education systems. Universities themselves remain the most active actors, developing institutional policies, training programs, and curriculum adjustments.

This multi-layered governance structure allows for flexibility but also creates inconsistencies. Students may encounter different AI policies across courses within the same institution, leading to confusion and uneven learning experiences. The lack of standardization is further compounded by the varying pace of policy development, with some institutions advancing rapidly while others lag behind.

The study also highlights the dual use of "hard" and "soft" policy tools. Hard measures include detection systems and formal regulations, while soft approaches focus on guidance, dialogue, and education. Universities are increasingly investing in AI literacy initiatives, offering workshops, resources, and training to help students and staff navigate the ethical and practical dimensions of AI use. At the same time, curriculum redesign is emerging as a key strategy, with institutions exploring new forms of assessment that are less susceptible to AI misuse.

Despite these efforts, the effectiveness of current policies remains unclear. The review notes a lack of empirical evidence on implementation, compliance, and outcomes, reflecting the early stage of policy development in this field. This gap limits the ability to assess what works and what does not, leaving institutions to experiment with different approaches in real time.

Calls grow for coordinated, inclusive, and future-ready AI governance

Looking ahead, the study outlines a set of policy priorities that are gaining traction across the literature. A central recommendation is the need for clearer and more context-specific institutional guidance. Generic policies are seen as insufficient; stakeholders require tailored frameworks that reflect disciplinary differences and the varied uses of AI.

Curriculum and assessment reform is identified as another critical area. Rather than treating AI solely as a threat, the study argues for its integration into teaching and learning processes in ways that promote critical engagement and ethical use. This includes redesigning assessments to evaluate higher-order thinking skills that cannot be easily replicated by AI.

The importance of collaborative governance is also emphasized. Effective AI policy requires coordination among universities, governments, international organizations, and industry actors, as well as meaningful input from students and educators. However, the study acknowledges an ongoing tension between the need for consistency and the need for flexibility, suggesting that future governance models will need to balance these competing demands.

Equity and inclusion remain underdeveloped areas in current policy frameworks. While widely recognized as important, these issues receive less attention than academic integrity and assessment. The study calls for more explicit consideration of vulnerable groups, including non-native English speakers and students with limited access to digital resources, to ensure that AI does not exacerbate existing inequalities.

First published in: Devdiscourse
