GenAI chatbot trust in higher education hinges on perceived value and attitude

CO-EDP, VisionRI | Updated: 23-10-2025 09:51 IST | Created: 23-10-2025 09:51 IST

Student trust in generative artificial intelligence (GenAI)-powered chatbots depends less on technological simplicity and more on perceived usefulness, positive attitudes, and social influence, reveals a new study published in Education Sciences. The findings underscore how emotional and cognitive factors, rather than technical proficiency alone, drive student adoption of GenAI tools in academic contexts.

The study "Determinants of Chatbot Brand Trust in the Adoption of Generative Artificial Intelligence in Higher Education" offers one of the most comprehensive examinations yet of how university students form trust in educational AI systems. The studyexplores how psychological, social, and technological factors interact to influence students' behavioral intentions toward using AI chatbots for learning and academic support.

Attitude and perceived usefulness drive AI trust more than ease of use

The research surveyed 609 undergraduate students across public universities in North Central and Southwestern Nigeria, providing statistically robust insight into how young learners interact with GenAI tools in real academic settings. Using an extended Technology Acceptance Model (TAM) integrated with constructs of brand trust and social influence, the authors applied partial least squares structural equation modeling (PLS-SEM) to map relationships between students' perceptions, attitudes, and behavioral intentions.
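For readers curious about the mechanics, the sketch below shows how an extended-TAM path model of this kind can be specified and estimated in Python. It uses the open-source semopy package, which performs covariance-based SEM rather than the PLS-SEM the authors ran in dedicated software (such as SmartPLS), and it fits the model to synthetic Likert-style data. Every item name, path weight, and the data itself are hypothetical illustrations, not the study's instrument or results.

    # Illustrative only: synthetic data and hypothetical item names, with toy
    # path weights chosen to echo the article's qualitative findings.
    import numpy as np
    import pandas as pd
    from semopy import Model  # pip install semopy

    rng = np.random.default_rng(0)
    n = 609  # sample size reported in the study

    def items(latent, prefix, k=3):
        """Generate k Likert-style (1-5) indicator items for a latent score."""
        raw = latent[:, None] + rng.normal(0, 0.6, size=(n, k)) + 3
        return pd.DataFrame(np.clip(np.round(raw), 1, 5),
                            columns=[f"{prefix}{i + 1}" for i in range(k)])

    # Latent construct scores; weights mimic the reported pattern:
    # SI lowers PU, ATT drives BI and BT, and BT does not depend on BI.
    si = rng.normal(size=n)                                # social influence
    peou = rng.normal(size=n)                              # ease of use
    pu = 0.4 * peou - 0.2 * si + rng.normal(size=n)        # usefulness
    att = 0.5 * pu + 0.3 * si + rng.normal(size=n)         # attitude
    bi = 0.6 * att + 0.3 * pu + 0.2 * si + rng.normal(size=n)
    bt = 0.6 * att + rng.normal(size=n)                    # brand trust

    df = pd.concat([items(x, p) for x, p in
                    [(peou, "peou"), (pu, "pu"), (si, "si"),
                     (att, "att"), (bi, "bi"), (bt, "bt")]], axis=1)

    spec = """
    # measurement model: three reflective items per construct
    PEOU =~ peou1 + peou2 + peou3
    PU   =~ pu1 + pu2 + pu3
    SI   =~ si1 + si2 + si3
    ATT  =~ att1 + att2 + att3
    BI   =~ bi1 + bi2 + bi3
    BT   =~ bt1 + bt2 + bt3
    # structural model: the paths the article discusses, including the
    # PEOU -> ATT/BI and BI -> BT paths it reports as non-significant
    PU  ~ PEOU + SI
    ATT ~ PU + PEOU + SI
    BI  ~ ATT + PU + PEOU + SI
    BT  ~ ATT + BI
    """

    model = Model(spec)
    model.fit(df)
    print(model.inspect())  # path coefficients, standard errors, p-values

On this synthetic data the BI-to-BT and PEOU-to-ATT estimates hover near zero, loosely mirroring the non-significant paths the study reports; the authors' actual analysis, of course, rests on their own survey instrument and PLS-SEM estimation.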

The findings reveal a critical hierarchy in how students evaluate chatbot technologies. Attitude emerged as the strongest predictor of both behavioral intention and brand trust, a finding that highlights the centrality of positive emotional perception in technology adoption. When students believe chatbots are beneficial, reliable, and aligned with their academic needs, their trust in the technology's brand deepens significantly.

On the other hand, the study found that behavioral intention, a traditional predictor of technology use, did not directly influence brand trust. This suggests that while students may express willingness to use GenAI chatbots, such intent does not necessarily translate to trust in their long-term reliability or ethical grounding.

Equally noteworthy is that perceived ease of use, long considered a key driver in technology acceptance, had a limited role. It influenced perceived usefulness, meaning that students appreciated when systems were user-friendly, but it did not directly affect attitude or behavioral intention. In short, ease alone does not earn trust; students must see tangible value in what the chatbot delivers.

Perceived usefulness, by contrast, played a pivotal role in shaping both attitude and behavioral intention. Students who believed chatbots improved their learning performance or streamlined their academic tasks were far more likely to express trust and willingness to continue using the technology. The authors conclude that usefulness, not simplicity, is the decisive factor in building sustainable trust relationships between learners and AI tools.

Social influence shapes trust but can undermine perceived usefulness

The study highlights the profound role of social influence in shaping student perceptions. The social environment, comprising peers, instructors, and institutional culture, emerged as a powerful driver of attitude, usefulness, and behavioral intention toward chatbots.

However, this influence is not entirely positive. While peer endorsement encourages adoption, the study found that social influence negatively affected perceived usefulness. This paradox suggests that external encouragement can sometimes lead students to adopt chatbots out of conformity or curiosity rather than genuine appreciation of their benefits. Over time, this can weaken perceptions of value and reduce trust in the technology.

The authors interpret this finding as a warning to institutions that rely too heavily on social persuasion or promotional campaigns to drive GenAI adoption. True trust, they argue, develops through authentic engagement and experience, not external pressure. When students independently discover that chatbots enhance their learning outcomes, their trust becomes internalized and enduring.

This distinction between social compliance and personal conviction may explain why early enthusiasm for educational AI sometimes fades. While group influence initially drives experimentation, long-term trust depends on whether students find real academic benefit. Institutions, therefore, must focus on creating meaningful chatbot interactions that deliver consistent value rather than relying solely on hype or peer endorsement.

Building brand trust in educational AI: Lessons for institutions

The research offers several implications for universities integrating AI chatbots into their learning environments. First, it emphasizes that trust is experiential. Students build confidence in chatbots through repeated positive interactions in which the system consistently provides relevant, accurate, and empathetic responses. This finding aligns with the broader understanding of brand trust as an outcome of perceived reliability and satisfaction rather than mere exposure.

Second, the authors recommend that higher education institutions develop context-specific GenAI systems tailored to their academic ecosystems rather than deploying generic commercial chatbots. Localized systems can better align with institutional values, language norms, and ethical standards, which in turn fosters credibility and trust among users.

Third, institutions should invest in AI literacy and ethical awareness programs. Many students approach GenAI with excitement but limited understanding of its design, data sources, or limitations. Educating users about transparency, privacy, and algorithmic fairness can strengthen their confidence in using AI responsibly and mitigate fears of bias or misuse.

Finally, the study calls for balanced policy frameworks that integrate innovation with oversight. Universities should ensure that chatbot deployment is accompanied by clear guidelines on data governance, content moderation, and user accountability. Such frameworks, the authors argue, are essential to maintaining the delicate equilibrium between technological advancement and ethical responsibility.

First published in: Devdiscourse
