Is perceived usefulness the real reason students adopt AI chatbots?
Artificial intelligence chatbots have become a routine part of student life in less than three years. However, beneath the rapid adoption of tools such as ChatGPT lies a more complex question: what actually drives students to use conversational AI for learning, and what holds others back?
A new study, "What Drives Students' Use of AI Chatbots? Technology Acceptance in Conversational AI," published as an arXiv preprint, applies and extends the Technology Acceptance Model to explain why university students adopt conversational AI chatbots as learning assistants.
The authors use partial least squares structural equation modeling to test how perceived usefulness, perceived ease of use, trust, subjective norms and perceived enjoyment influence students' behavioral intention to use AI chatbots in educational settings. Their findings challenge common assumptions about usability and highlight the central role of academic value, confidence and social context.
Perceived usefulness emerges as the dominant driver
According to the study, perceived usefulness is the primary determinant of students' intention to use AI chatbots for learning. When students believe that a chatbot enhances their academic performance, reduces frustration or provides meaningful learning support, they are significantly more likely to incorporate it into their routines.
This aligns with decades of research based on the original Technology Acceptance Model, which identifies usefulness and ease of use as core predictors of technology adoption. But the conversational AI context introduces new dynamics. Pitts and Motamedi show that usefulness not only directly predicts behavioral intention but also serves as a central hub linking other psychological factors to adoption.
Trust, enjoyment and subjective norms all influence how useful students perceive a chatbot to be. In turn, perceived usefulness exerts the strongest direct effect on whether students intend to use it. The model explains more than 70 percent of the variance in behavioral intention, indicating strong explanatory power.
The findings suggest that in the era of generative AI, students are not primarily asking whether a system is easy to operate. Instead, they are asking whether it helps them succeed academically.
The authors note that "usefulness" itself may carry multiple meanings. For some students, it may reflect pedagogically supportive value, such as clearer explanations, conceptual scaffolding or debugging assistance. For others, usefulness may be interpreted as convenience, including drafting text or offloading tasks. The distinction matters. If usefulness is equated with reduced effort alone, patterns of overreliance may follow. The study does not disentangle these interpretations but flags them as a key direction for future research.
Ease of use becomes an indirect factor
Contrary to expectations from earlier educational technology research, perceived ease of use does not significantly predict behavioral intention once other factors are included in the model. Instead, ease of use influences adoption indirectly through its effect on perceived usefulness.
In practical terms, this means that students who find AI chatbots easy to interact with are more likely to perceive them as useful, and that perception of usefulness drives intention. Ease of use alone, however, does not independently motivate adoption.
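The mediation pattern described here can be made concrete with a small numerical sketch. This is an illustration only, not the study's analysis: the paper uses partial least squares structural equation modeling on survey data, whereas the snippet below simulates synthetic data with an ease-of-use → usefulness → intention structure and recovers the direct and indirect paths with plain least-squares regression. All variable names and effect sizes are invented for illustration.

```python
# Illustrative path-analysis sketch of mediation: ease of use influences
# intention mostly THROUGH usefulness, not directly. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
ease = rng.normal(size=n)                                   # perceived ease of use
usefulness = 0.6 * ease + rng.normal(scale=0.8, size=n)     # a-path: ease -> usefulness
intention = (0.7 * usefulness                               # b-path: usefulness -> intention
             + 0.05 * ease                                  # weak direct path (c')
             + rng.normal(scale=0.6, size=n))

def ols_slopes(y, predictors):
    """Ordinary least squares; returns slope estimates (intercept dropped)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

(a,) = ols_slopes(usefulness, [ease])                # estimate a-path
b, c_prime = ols_slopes(intention, [usefulness, ease])  # estimate b-path and direct path

# The indirect effect (a * b) dominates the direct effect (c'), matching
# the "ease of use matters only via usefulness" finding.
print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")
```

In this toy setup, the product of the two mediated paths comes out far larger than the residual direct path, which is the same qualitative signature the study reports for ease of use.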
The authors argue that conversational AI may reduce the salience of usability concerns because it relies on familiar natural language interaction. Students ask questions and receive responses in dialogue form, without needing to learn complex interfaces or workflows. When basic interaction barriers are low, usability becomes an enabling condition rather than a decisive factor.
This shift reflects the distinctive nature of generative AI systems. Unlike traditional e-learning platforms with structured menus and fixed functionalities, chatbots simulate conversation. Once students feel comfortable typing or speaking prompts, the primary evaluation shifts toward output quality, reliability and academic value.
The study's findings suggest that future human factors research on conversational systems may need to rethink the relative weight of effort-related considerations. As interaction becomes more natural, the central adoption question moves from "Is it easy?" to "Is it worth it?"
Trust, enjoyment and social influence shape beliefs
Trust in AI plays a nuanced but critical role. The study finds that trust does not directly predict behavioral intention. Instead, it significantly influences perceived usefulness and perceived ease of use.
Students face uncertainty when interacting with generative AI systems. Chatbots can produce incorrect, biased or fabricated information. Their internal processes remain opaque. In this context, trust reflects a willingness to rely on AI-generated guidance despite uncertainty.
When students trust the system, they are more likely to interpret its outputs as helpful and easier to engage with. When trust is low, they may double-check responses, restrict usage to low-stakes tasks or disengage altogether. Trust therefore acts as an indirect reinforcer of adoption, shaping how interactions are experienced rather than directly compelling use.
Perceived enjoyment also emerges as a significant factor. Students who find chatbot interactions engaging, interesting or enjoyable are more likely to intend to use them. Enjoyment exerts both direct and indirect effects, enhancing behavioral intention while also strengthening perceptions of usefulness and ease of use.
The conversational design of chatbots may trigger social and affective responses. Research in human-computer interaction has long shown that users often attribute human-like qualities to interactive systems. In educational settings, this can translate into experiences of encouragement, companionship or reduced anxiety. The study suggests that intrinsic motivation, not just instrumental performance gains, influences AI adoption.
Subjective norms, defined as perceived social pressure from peers, instructors and institutions, do not directly predict intention either. Instead, they strongly influence trust, enjoyment and usefulness. When AI use is framed as legitimate, supported and appropriate within academic environments, students are more likely to view chatbots as trustworthy and beneficial.
This finding carries policy implications. Clear institutional guidelines, instructor modeling and classroom norms may shape how students interpret AI tools, even if they do not directly compel use. Social context provides legitimacy signals that frame AI as acceptable or questionable.
Implications for education and AI design
The research suggests that effective adoption of conversational AI in education depends less on interface design and more on clarity of academic value, calibrated trust and social framing.
- Focusing on transparent and reliable outputs: features such as citation support, confidence indicators, explanation prompts and scaffolding mechanisms may strengthen trust and perceived usefulness.
- Articulating when and how AI use is pedagogically appropriate: policies that distinguish between supportive use and misuse can reduce ambiguity and help students form stable beliefs about legitimacy.
- Moving beyond intention toward behavioral analysis: because the research relies on self-reported intention rather than direct usage data, future studies should examine how frequency, task type and persistence relate to these psychological factors.
The model may evolve as AI tools mature. As reliability improves and AI literacy increases, the relative importance of trust or enjoyment may shift. Longitudinal research will be essential to track these changes.
FIRST PUBLISHED IN: Devdiscourse