AI-savvy but not AI-safe? Digital behavior gap among students


CO-EDP, VisionRI | Updated: 19-02-2026 12:42 IST | Created: 19-02-2026 12:42 IST

A new study raises fresh concerns about how well communication students are prepared for the ethical pressures of the generative AI era. While many undergraduates show strong command of digital tools and cybersecurity principles in structured academic settings, the research finds that this classroom competence may not carry over into their everyday online behavior.

These findings are detailed in "Dissonance in the Algorithmic Era: Evaluating Showcase Digital Competence and Ethical Resilience in Communication Training," published in Journalism and Media. In the study, the author examines whether high-performing communication students consistently apply ethical standards and digital security practices outside the classroom, revealing a significant gap between demonstrated knowledge and lived digital habits.

High academic performance, limited behavioral consistency

The study was conducted during the 2025–2026 academic year at Universidad Europea de Valencia using an action-research design. A total of 59 undergraduate communication students participated. The research applied a STEM–SSH pedagogical framework, integrating science, technology, engineering and mathematics with social sciences and humanities. Evaluation followed the Kirkpatrick four-level model, which measures reaction, learning, behavior and results.

Quantitative diagnostic tests revealed exceptionally strong academic performance. Nearly all students demonstrated accurate knowledge of cybersecurity principles, digital privacy safeguards and ethical communication standards. Most participants correctly identified identity theft prevention strategies and data protection best practices. Project-based assessments, including the creation of 25 interactive infographics addressing digital ethics and AI-related risks, produced high average scores, with nearly all submissions exceeding a 9 out of 10 rating.

These results indicate strong achievement at Kirkpatrick Level 2, which evaluates knowledge acquisition and skill development. Students understood the mechanics of digital safety, misinformation detection and AI auditing in structured academic exercises.

However, the study's behavioral assessment uncovered a striking contradiction. When reflecting on their personal digital practices, many students admitted that they only occasionally scanned files for malware, rarely read privacy policies, and seldom applied consistent cybersecurity routines outside coursework. Despite their theoretical awareness, their everyday digital behaviors lacked sustained vigilance.

This gap between competence and conduct confirms the study's central hypothesis. Academic success does not automatically translate into durable ethical habits. Students may excel in controlled educational environments yet revert to convenience-driven behaviors in daily digital life.

The author identifies this phenomenon as showcase digital competence, a condition in which individuals demonstrate high performance in evaluative contexts but fail to internalize consistent ethical practice.

Generative AI, misinformation and the "Data Porridge" effect

The study examines how students perceive generative AI's impact on information ecosystems. Through qualitative forum discussions, participants expressed concern about the epistemic instability created by large language models and automated content generation systems.

Students described generative AI outputs as blending reliable and fabricated information into indistinguishable streams. The metaphor of "data porridge" emerged to capture this mixture of verified knowledge and synthetic content. Participants recognized that algorithmically generated text can obscure original sources, complicate traceability and weaken accountability in digital communication.

The research situates these concerns within a broader concept termed Globofriction, referring to the destabilizing speed of technological acceleration in global media systems. As generative AI tools proliferate, communicators face pressure to produce content rapidly while navigating uncertain information authenticity.

Participants acknowledged the risk that AI-generated content could amplify misinformation during crises. Automated systems may inadvertently spread inaccuracies or reinforce biases embedded in training data. Despite recognizing these threats, students maintained that human oversight remains essential. They emphasized that final verification responsibility must rest with communicators rather than automated systems.

The study confirms that structured pedagogical scaffolding improves students' ability to audit AI outputs critically. Exercises focused on detecting hallucinations, verifying references and analyzing algorithmic bias strengthened analytical skills. Yet this competence remained largely theoretical unless reinforced through sustained behavioral practice.

The findings highlight a critical tension in communication training. Universities successfully teach students how to evaluate AI-generated information, but long-term behavioral transformation requires deeper cultural and habitual shifts.

Rethinking digital literacy for ethical resilience

The research advances three main conclusions. First, the Prosumer Gap persists even among digitally literate students. The term prosumer reflects the dual role of individuals as both producers and consumers of digital content. While students demonstrated high-level academic literacy, their self-regulation in everyday digital environments was inconsistent.

Second, reflective and dialogic learning environments enhance ethical resilience. Forum discussions encouraged students to articulate concerns about AI manipulation, democratic erosion and algorithmic opacity. These conversations strengthened awareness of generative AI's systemic implications.

Third, ethical resilience requires more than cognitive understanding. Habit formation and behavioral reinforcement are essential. Without integrating ethical practice into daily routines, knowledge risks remaining performative.

The study argues that communication education must evolve beyond tool proficiency. Teaching students to use generative AI platforms effectively is insufficient. Programs must cultivate digital sovereignty, encouraging students to question algorithmic outputs, monitor their own digital behaviors and maintain independent judgment.

The research also underscores the importance of interdisciplinary integration. Combining technical literacy with ethical reasoning ensures that communication professionals can navigate AI-driven environments responsibly. STEM competencies must be balanced with social science insights into media ethics, governance and democratic accountability.

Importantly, the study situates communication training within a broader societal transformation. As generative AI systems become embedded in news production, marketing, public relations and crisis communication, ethical resilience becomes a strategic necessity.

The algorithmic era challenges traditional notions of authorship, verification and accountability. Automated systems can produce persuasive narratives at scale, often without transparent sourcing. Future communicators must therefore function as auditors, interpreters and ethical gatekeepers within complex information ecosystems.

Organizations adopting generative AI tools must recognize that technical training alone will not guarantee ethical compliance. Behavioral reinforcement, cultural norms and institutional safeguards are equally critical.

  • FIRST PUBLISHED IN: Devdiscourse