Hidden impact of generative AI on student trust and learning behavior


A new study suggests that growing reliance on generative AI may be reshaping not only learning behaviors but also the very foundation of trust in educational tools. In view of this development, researchers have begun to question whether increased use of AI is fostering informed engagement or creating a deeper dependence that could influence long-term professional development.

A study by Oksana Babenko of the University of Alberta, titled "Lifelong Learning in the Age of AI: An Investigation of Trust in Generative AI Among Health Profession Students," published in International Medical Education, investigates how students' use of generative AI tools is linked to their trust in these systems as lifelong learning companions. The findings offer a critical lens into how the next generation of healthcare professionals may integrate AI into their careers, with implications for education systems, clinical competence, and patient care outcomes.

The research is based on a cross-sectional survey of 558 health profession students across multiple disciplines, including medicine, nursing, dentistry, pharmacy, and allied health fields. These participants, primarily aged between 18 and 25, represent a digitally native generation that increasingly relies on AI-powered tools such as ChatGPT and virtual tutors to support learning tasks, idea generation, and knowledge acquisition.

Rising AI usage reshapes learning habits among future healthcare professionals

The study finds that generative AI has already become a mainstream tool in health professions education. A large majority of students reported using AI-powered platforms to support their learning, with over four in five indicating active engagement with such tools. Additionally, around three-quarters of respondents use generative AI specifically to explore ideas and assist in problem-solving tasks.

This widespread adoption reflects a broader shift in how students approach education in the digital age. Learning is no longer confined to textbooks, lectures, or peer collaboration. Instead, it is increasingly mediated through interactions with intelligent systems that can generate responses, simulate explanations, and provide instant feedback.

The study situates this trend within the concept of lifelong learning, a critical competency in healthcare professions that requires continuous updating of knowledge, skills, and clinical practices. Traditionally, lifelong learning has been self-driven and grounded in critical thinking, professional judgment, and collaboration. However, the integration of generative AI introduces a new dynamic where learning is partly outsourced to technology.

This transformation raises important questions about the nature of knowledge acquisition. While AI tools can enhance efficiency and accessibility, they may also reduce opportunities for collaborative learning and interpersonal skill development. The research highlights concerns that heavy reliance on AI could limit peer interaction, weaken critical thinking, and reduce originality in problem-solving approaches.

At the same time, generative AI offers clear benefits. Students can access information quickly, generate summaries, and test ideas in real time. These capabilities can support self-directed learning, particularly in complex and rapidly evolving fields such as medicine. The challenge lies in balancing these advantages with the need to maintain foundational competencies that underpin safe and effective clinical practice.

Trust in generative AI grows alongside usage, with notable demographic differences

The study finds a strong relationship between students' use of generative AI and their trust in these systems. Statistical analysis shows that increased AI usage is closely associated with higher levels of trust in AI as a tool for lifelong learning, with usage explaining nearly 40 percent of the variation in trust levels among students.
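For readers less familiar with this kind of statistic, "explaining nearly 40 percent of the variation" corresponds to an R-squared of roughly 0.40 in a regression of trust on usage. The short sketch below illustrates that idea only; the data are synthetic and hypothetical, the simple linear model is an assumption for illustration, and none of the values come from the study itself.

```python
# Illustrative sketch only: synthetic, hypothetical data showing what an R-squared
# of about 0.40 means when regressing trust scores on AI usage scores.
import numpy as np

rng = np.random.default_rng(0)

n = 558                                   # sample size matching the survey
usage = rng.uniform(1, 5, n)              # hypothetical usage scores on a 1-5 scale
noise = rng.normal(0, 1.05, n)            # unexplained individual variation
trust = 1.0 + 0.75 * usage + noise        # hypothetical trust scores partly driven by usage

# Fit a simple least-squares line: trust ~ b0 + b1 * usage
b1, b0 = np.polyfit(usage, trust, 1)
predicted = b0 + b1 * usage

# R-squared: the share of variance in trust accounted for by usage
ss_res = np.sum((trust - predicted) ** 2)
ss_tot = np.sum((trust - trust.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"R-squared ~ {r_squared:.2f}")     # around 0.4, i.e. usage "explains" ~40% of the variation
```

An R-squared near 0.40 is a strong association for survey data of this kind, but it also means the majority of the variation in trust is attributable to factors other than usage.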

This relationship suggests that familiarity with AI tools may reinforce perceptions of their reliability, competence, and usefulness. As students interact more frequently with generative AI, they begin to view it not only as a supplementary resource but as a dependable partner in their educational journey.

However, the study reveals that this trust is not uniform across all groups. Male students reported both higher usage and greater trust in generative AI compared to female students. While the differences are modest, they point to underlying variations in attitudes toward technology. Male students appear more inclined to perceive AI as a practical and beneficial tool, whereas female students tend to approach it with greater caution, particularly regarding ethical concerns and potential risks.

Geographic differences also emerge as a critical factor. Students from the Global South reported significantly higher levels of trust in generative AI than their counterparts in the Global North, despite similar levels of usage. This finding highlights a complex interplay between technological access, educational infrastructure, and cultural attitudes toward innovation.

Higher trust levels in the Global South may reflect limited access to traditional educational resources, making AI tools more valuable as alternative sources of knowledge. At the same time, it may also indicate lower exposure to public debates about the risks associated with AI, including issues such as data privacy, algorithmic bias, and misinformation.

Importantly, the study cautions that higher trust does not necessarily equate to well-calibrated trust. Students who rely heavily on AI without a strong foundation in critical evaluation may be more vulnerable to errors, misinformation, and overdependence on automated systems.

Balancing innovation with critical thinking and professional competence

While generative AI presents new opportunities for enhancing learning, the study underscores the need for careful integration into health professions education. The growing trust in AI systems raises concerns about overreliance, particularly among students who are still developing their clinical judgment and critical thinking skills.

Healthcare is a domain where errors can have serious consequences. The ability to assess information critically, collaborate with colleagues, and make informed decisions is essential for patient safety. If students become overly dependent on AI-generated outputs, there is a risk that these core competencies may be compromised.

The study also highlights the importance of maintaining human interaction in the learning process. Teamwork, communication, and empathy are fundamental skills in healthcare that cannot be fully replicated by AI systems. As learning becomes more individualized through AI, educators must find ways to preserve opportunities for collaborative engagement.

Another key concern is the perception of AI as an authoritative source of information. While many students recognize the need for caution, a significant proportion express confidence in the accuracy and reliability of AI-generated content. This duality reflects a tension between trust and skepticism that will shape how future professionals use AI in practice.

To address these challenges, the study points to the need for enhanced AI literacy among students. Understanding how AI systems work, their limitations, and potential biases is essential for responsible use. Educational institutions may need to incorporate training that emphasizes critical evaluation of AI outputs, ethical considerations, and the role of human oversight.

The findings also suggest that policymakers and educators should adopt a proactive approach to integrating AI into curricula. Rather than treating AI as an external tool, it should be embedded within educational frameworks in a way that supports learning objectives while safeguarding professional standards.

A turning point for lifelong learning in healthcare

The strong link between AI usage and trust highlights both an opportunity and a risk. On one hand, AI can enhance learning efficiency and accessibility, supporting students in navigating complex information landscapes. On the other hand, unchecked trust may lead to overreliance, reduced critical thinking, and potential gaps in professional competence.

The differences observed across gender and geographic contexts further underscore the need for tailored approaches to AI integration. Educational strategies must account for diverse perspectives, ensuring that all students develop the skills needed to use AI responsibly and effectively.

First published in: Devdiscourse