Digital learning boom amplifying online harassment risks in emerging nations


CO-EDP, VisionRI | Updated: 24-02-2026 19:07 IST | Created: 24-02-2026 19:07 IST

Artificial intelligence (AI) is rapidly transforming university education across emerging economies, but new evidence suggests that this digital expansion carries unintended consequences: students face rising exposure to online harassment embedded within algorithmically structured interaction spaces.

In the study Reshaping Digital Social Reality in the AI Era: A Data-Driven Analysis of University Students' Exposure to Digital Harassment in Emerging Countries, published in Societies, the author presents cross-national data showing that AI-mediated interaction, social media intensity, and digital identity visibility significantly increase harassment exposure, with direct effects on student mental health and e-learning continuity.

AI-mediated interactions and the architecture of exposure

Digital harassment in AI-driven educational settings is not simply a matter of individual misconduct. Instead, it is embedded within the architecture of digital interaction itself. The author applies an expanded Unified Theory of Acceptance and Use of Technology framework to examine how structural and behavioral factors shape exposure.

The strongest predictor of harassment exposure identified in the analysis is AI-mediated interaction. Students who engage more frequently with algorithmically structured systems such as automated discussion forums, AI-supported messaging environments, and digitally curated social learning spaces face significantly higher exposure levels. These environments amplify visibility, accelerate interaction cycles, and reduce friction in communication, conditions that can enable harassment to spread quickly and persistently.

Social media engagement intensity also plays a major role. Students who spend more time interacting on digital platforms, whether for academic collaboration or informal communication, show a marked increase in harassment exposure. The study suggests that as educational and social platforms converge, boundaries between academic and personal spaces blur, increasing contact opportunities and prolonging conflict dynamics.

Digital identity visibility emerges as another significant factor. Students who maintain highly visible online profiles, whether through academic sharing, public discussion participation, or social media presence, face greater exposure. Increased visibility may enhance academic networking and collaboration, but it also broadens the audience reach of harmful interactions.

Cultural norms and social expectations further shape exposure patterns. In certain contexts, gender norms, social hierarchies, and expectations regarding online conduct influence both the likelihood of harassment and the way it is perceived or reported. The study finds that cultural context significantly moderates the relationship between AI-mediated interaction and harassment exposure, indicating that technological architecture interacts with local social structures rather than operating in isolation.

On the other hand, technological literacy and cybersecurity awareness act as protective factors. Students with stronger digital skills and awareness of privacy controls, reporting tools, and online risk mitigation strategies experience significantly lower exposure levels. The results suggest that digital competence does not eliminate harassment risk but can meaningfully reduce vulnerability.

Mental health and academic continuity under pressure

The study investigates downstream consequences. The results show a clear negative association between digital harassment exposure and students' mental health. Increased exposure correlates with heightened stress, anxiety, and psychological strain, reinforcing concerns that online hostility can carry lasting emotional consequences.

The academic impact is equally concerning. Higher exposure levels are linked to reduced e-learning continuity. Students who experience digital harassment are more likely to disengage from online platforms, reduce participation in virtual discussions, and withdraw from collaborative learning activities. In digitally dependent educational environments, this withdrawal can directly affect academic performance and long-term educational attainment.

The study's moderation analysis reveals additional nuance. Academic specialization significantly influences how social media engagement and digital identity visibility translate into exposure. Students in certain fields, particularly those with higher online interaction demands, may face elevated risks. However, technological literacy does not vary significantly in its protective effect across academic disciplines, suggesting that digital skills training could serve as a broadly applicable intervention.

Cultural context also shapes how AI-mediated interaction translates into exposure, underscoring that technology-driven risks are embedded within social realities. The findings indicate that harassment dynamics differ across national and regional environments, influenced by legal frameworks, digital norms, and institutional responses.
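In statistical terms, the moderation effects described above correspond to interaction terms in a regression model: the slope linking AI-mediated interaction to harassment exposure is allowed to differ across cultural contexts. A minimal sketch with synthetic data illustrates the idea; all variable names, effect sizes, and the data itself are illustrative assumptions, not the study's actual measures or results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Illustrative variables (not the study's data):
# ai      = standardized AI-mediated interaction intensity
# culture = binary indicator for one of two cultural contexts
ai = rng.normal(size=n)
culture = rng.integers(0, 2, size=n).astype(float)

# Simulate exposure where the effect of AI-mediated interaction
# is stronger in one cultural context (a moderation effect).
exposure = 0.3 * ai + 0.5 * ai * culture + rng.normal(scale=0.5, size=n)

# Design matrix: intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), ai, culture, ai * culture])
beta, *_ = np.linalg.lstsq(X, exposure, rcond=None)

# beta[3] estimates the interaction (moderation) coefficient:
# a nonzero value means cultural context changes the AI-exposure slope.
print("main effect of AI interaction:", round(beta[1], 2))
print("moderation by cultural context:", round(beta[3], 2))
```

A significant interaction coefficient is what lets the study conclude that cultural context moderates, rather than merely accompanies, the link between AI-mediated interaction and harassment exposure.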

By linking exposure not only to psychological well-being but also to educational continuity, the research positions digital harassment as a systemic educational challenge rather than an isolated behavioral issue. In AI-driven academic ecosystems, harassment can disrupt learning pathways at scale.

Governance, policy, and the future of AI-driven education

The findings suggest that addressing digital harassment requires more than reactive reporting systems or disciplinary procedures. Instead, institutions must consider how platform design, algorithmic moderation, and digital literacy education interact to shape risk.

AI-mediated platforms are not neutral conduits. Their design influences who becomes visible, how quickly content spreads, and how long interactions persist. Governance strategies must therefore incorporate structural safeguards, including stronger moderation protocols, adaptive reporting mechanisms, and transparent algorithmic policies.

The protective role of technological literacy highlights the importance of cybersecurity education as part of university curricula. Training students in privacy management, digital boundary setting, and risk recognition may significantly reduce exposure levels. However, the research suggests that literacy alone cannot offset structural vulnerabilities embedded within AI systems.

The study also calls for culturally responsive governance. Harassment dynamics vary across social contexts, and interventions effective in one region may not translate seamlessly to another. Policymakers must account for local norms while upholding universal principles of student safety and academic integrity.

Importantly, the research frames digital harassment as part of a broader transformation of social reality in the AI era. As universities integrate AI into teaching, evaluation, and collaboration, they reshape the social fabric of learning environments. Interaction is increasingly mediated by algorithms that determine visibility, prioritization, and reach. In such settings, harassment can become amplified not only by human intent but by platform dynamics.

The study does not argue against digital transformation. Instead, it calls for proactive governance that aligns technological advancement with student well-being.

  • FIRST PUBLISHED IN:
  • Devdiscourse