Global South faces heightened AI risks amid gaps in education and digital readiness
A new academic review warns that the real challenge may not lie in artificial intelligence itself, but in how unprepared institutions and citizens are to engage with it critically. Researchers argue that the rise of hybrid human–AI societies is exposing deep structural gaps in education, governance, and digital competence, particularly in regions already marked by inequality.
The study, titled "Digital Competence, AI and Sustainable Social Transitions: An Ibero-American Framework for Hybrid Human–AI Societies," published in World, presents a comprehensive conceptual framework for understanding how societies can navigate AI-driven transformation without deepening social exclusion or eroding human agency.
Digital competence emerges as the frontline defense in AI societies
The study identifies digital competence not as a narrow technical skillset, but as a multidimensional construct that integrates algorithmic literacy, critical data awareness, ethical reasoning, and socio-emotional capacities. This broader definition reflects a growing recognition that interacting with AI systems requires more than operational familiarity. It demands the ability to question, interpret, and regulate technology's influence on human decision-making.
Across the literature and policy analysis, the researchers find a consistent pattern. Individuals and institutions with higher levels of critical and ethical digital competence are less vulnerable to AI-related risks such as uncritical reliance on automated systems, exposure to biased algorithms, and loss of autonomy. Conversely, environments where digital competence remains limited to basic technical skills show greater susceptibility to manipulation, dependency, and exclusion.
This relationship highlights a fundamental shift in how digital skills are understood. Earlier frameworks focused on access and usability, but the study argues that such approaches are no longer sufficient in AI-mediated environments. Algorithmic systems now shape access to information, influence behavior, and automate decisions once made by humans. Without the ability to critically engage with these systems, users risk becoming passive participants in processes they neither understand nor control.
The research further emphasizes that digital competence functions as a protective mechanism. It enables individuals to maintain agency, evaluate the reliability of AI outputs, and resist persuasive or manipulative design features embedded in digital platforms. In this sense, digital competence is positioned not just as a tool for productivity, but as a safeguard against systemic risks emerging from widespread AI adoption.
Policy gaps and inequality threaten sustainable AI transitions
While the importance of digital competence is widely acknowledged in policy discourse, the study finds a significant disconnect between rhetoric and implementation. Many international and regional frameworks emphasize employability and technical skills, but give limited attention to ethical governance, algorithmic accountability, or socio-emotional learning.
This imbalance is particularly evident in the Ibero-American context, where rapid digital transformation coexists with persistent structural inequalities. Despite improvements in connectivity, disparities in educational quality, institutional capacity, and access to digital resources continue to shape how AI is adopted and experienced.
The analysis reveals that policy frameworks often prioritize economic competitiveness and labor market readiness, while overlooking the broader societal implications of AI. As a result, educational systems may produce technically skilled users who lack the critical awareness needed to navigate complex AI-driven environments. This creates conditions where technology adoption outpaces the development of governance mechanisms and civic capacity.
The study highlights teacher education as a critical weak point. Educators play a central role in shaping how students interact with digital technologies, yet many training programs remain focused on instrumental uses of technology. Ethical considerations, critical data practices, and AI-related risks are often underrepresented. This gap limits the ability of educators to foster reflective and responsible engagement with AI among learners.
The consequences of these gaps are not evenly distributed. In regions with existing inequalities, AI has the potential to exacerbate disparities by reinforcing unequal access to knowledge, opportunities, and decision-making power. The study warns that without deliberate intervention, AI-driven systems may deepen dependency on external technologies and marginalize communities that lack the resources to engage with them critically.
A new framework links education, ethics, and AI governance
To address these challenges, the researchers propose a new conceptual framework that integrates five core dimensions of digital competence: algorithmic literacy, critical data awareness, AI ethics and governance, human–AI collaboration skills, and civic and socio-emotional capacities.
Each of these dimensions targets specific risks associated with AI adoption. Algorithmic literacy helps users understand how AI systems function and recognize potential biases. Critical data awareness enables individuals to question data sources and understand privacy implications. AI ethics and governance introduces principles such as transparency, accountability, and fairness into decision-making processes.
Human–AI collaboration skills focus on maintaining human oversight and avoiding over-reliance on automated systems. This includes recognizing the limitations of AI and ensuring that human judgment remains central in critical contexts. Civic and socio-emotional capacities extend the framework into the social domain, emphasizing empathy, ethical responsibility, and active participation in digital societies.
The framework is designed to be relational rather than additive. The study argues that these dimensions reinforce one another, creating a holistic approach to digital competence that aligns with sustainable and inclusive AI integration. Rather than treating AI as an external force, the framework positions human agency and education as key levers for shaping technological outcomes.
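The contrast between additive and relational competence can be made concrete with a small, purely hypothetical scoring sketch. The dimension names follow the study, but the scores and the aggregation rules are invented for illustration and are not part of the framework: an additive model averages dimensions so a strong score can mask a weak one, while a relational model lets a single weak dimension pull down the overall result.

```python
# Hypothetical illustration only: contrasting additive vs. relational
# aggregation of the five digital-competence dimensions named in the study.
# The scores and aggregation rules below are invented for this sketch and
# are not drawn from the paper.
from statistics import mean, geometric_mean

dimensions = {
    "algorithmic_literacy": 0.9,
    "critical_data_awareness": 0.8,
    "ai_ethics_and_governance": 0.2,  # one deliberately weak dimension
    "human_ai_collaboration": 0.85,
    "civic_socio_emotional": 0.8,
}

# Additive view: the arithmetic mean averages the weak spot away.
additive = mean(dimensions.values())

# Relational view: the geometric mean lets any weak dimension drag
# the overall score down, so no dimension can fully compensate for another.
relational = geometric_mean(dimensions.values())

print(f"additive aggregate:   {additive:.2f}")
print(f"relational aggregate: {relational:.2f}")
```

Under this toy rule the relational aggregate falls well below the additive one whenever any single dimension is weak, which is one way to read the study's claim that the dimensions reinforce one another rather than simply sum.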
Importantly, the researchers stress that AI itself is not inherently harmful. The risks associated with AI emerge when educational systems, governance structures, and institutional capacities fail to keep pace with technological change. This perspective shifts the focus from controlling AI to strengthening the human and institutional capabilities needed to guide its development and use.
The study also outlines practical pathways for implementation. These include integrating AI-related topics into curricula, developing interdisciplinary training programs, and establishing institutional policies that promote ethical AI use. In teacher education, the researchers suggest structured modules that combine technical understanding with ethical reflection and collaborative learning.
Toward human-centered AI futures
The study frames digital competence as both an enabling and stabilizing factor. It supports effective engagement with technology while also mitigating risks associated with dependency and uncritical use. This dual role positions education as a central arena for shaping the future of AI-driven societies.
The research also challenges the dominant emphasis on employability in digital skills discourse. While adaptability and technical proficiency remain important, the study argues that they must be complemented by ethical and civic capacities. Without this balance, digital competence risks being reduced to a tool for economic productivity, rather than a foundation for responsible participation in digital societies.
Looking ahead, the authors call for more empirical research to test and refine the proposed framework, as well as intervention-based studies that examine how digital competence develops over time and how it shapes behavior in real-world contexts.
FIRST PUBLISHED IN: Devdiscourse