Why users embrace or abandon generative AI: Critical adoption drivers


CO-EDP, VisionRI | Updated: 23-02-2026 09:46 IST | Created: 23-02-2026 09:46 IST

The global surge in generative artificial intelligence (genAI) use has raised a critical question: what makes adoption sustainable over time? While early adoption rates are high, long-term engagement depends on cognitive efficiency, trust, and seamless integration.

In "Digital Adoption of Generative AI Tools: A Multi-Theory Model Linking Cognitive Load, User Perceptions, and System Attributes," published in Sustainability, the author reframes generative AI adoption as a digital sustainability challenge. The study integrates psychological, cognitive, and system-level factors to explain why perceived usefulness alone is insufficient without minimizing mental load and optimizing system design.

The findings suggest that generative AI adoption is not simply a matter of usefulness or ease of use. Instead, it is a cognitively constrained and system-dependent process, where mental load, transparency, system quality, friction reduction, and digital integration interact to determine whether behavioral intention translates into sustained actual use.

Perceived value still drives use, but cognitive load limits it

The study confirms that the core principles of the Technology Acceptance Model remain highly relevant in the generative AI era. Perceived ease of use significantly increases perceived usefulness. When users find generative AI systems simple to learn and interact with, they are more likely to view them as valuable tools for enhancing productivity and performance.

Perceived usefulness emerges as the strongest predictor of user attitude. Users who believe generative AI improves task efficiency, enhances output quality, or saves time are more likely to form positive evaluations of the technology. These favorable attitudes strongly influence behavioral intention, which in turn leads directly to actual system use. Behavioral intention proves to be the most immediate predictor of real-world usage patterns.

However, the study identifies a critical constraint that traditional acceptance models often overlook: mental load.

Mental load reflects the cognitive effort required to process generative AI outputs. Because these systems produce dynamic, context-dependent, and sometimes ambiguous responses, users must invest mental resources to interpret, evaluate, and verify the results. High mental load significantly reduces perceived usefulness and weakens positive attitudes toward generative AI.

The effect of mental load on perceived usefulness is statistically significant and negative. When users experience cognitive overload, stress, or fatigue while interacting with AI-generated content, they are less likely to perceive the system as beneficial. The negative effect on attitude is smaller but still meaningful, indicating that cognitive strain does not entirely deter users, but it constrains their enthusiasm.

This finding positions Cognitive Load Theory as a central explanatory lens. Generative AI interactions impose both intrinsic load, related to task complexity, and extraneous load, related to system design and information presentation. The study conceptualizes mental load primarily as extraneous load, emphasizing that poorly structured outputs, excessive information, or ambiguous explanations intensify cognitive burden.

Importantly, the research shows that users may tolerate some cognitive strain if instrumental benefits are strong. When generative AI delivers clear performance gains, perceived usefulness can partially offset mental fatigue. This nuanced dynamic explains why adoption persists even when interactions are mentally demanding.

System attributes shape adoption pathways

The most significant theoretical contribution of the study lies in demonstrating that system-level attributes moderate the psychological pathways described by traditional acceptance models. Generative AI adoption depends not only on what users think and feel, but also on how the system behaves.

Four GenAI-specific attributes were examined as moderators: output quality, transparency, friction reduction, and system integration.

Output quality significantly mitigates the negative effect of mental load on perceived usefulness. When generative AI systems produce accurate, reliable, and coherent responses, users are more likely to maintain a sense of value even under cognitively demanding conditions. High-quality outputs buffer the impact of mental strain.

Transparency strengthens the relationship between perceived usefulness and attitude. When systems provide clear explanations or make their reasoning understandable, usefulness more effectively translates into positive evaluations. Transparency reinforces trust and reduces uncertainty, particularly in contexts where generative AI produces complex or probabilistic outputs.

Friction reduction enhances the conversion of positive attitudes into behavioral intention. Systems that minimize unnecessary steps, reduce interaction complexity, and streamline workflows allow users to act on their favorable perceptions more easily. Reduced friction lowers the effort threshold required to commit to regular use.

System integration strengthens the link between behavioral intention and actual use. When generative AI tools integrate smoothly with existing digital platforms and workflows, intentions are more likely to become habitual behavior. Seamless integration reduces contextual barriers that often prevent users from embedding new technologies into daily routines.

Collectively, these moderating effects show that system design choices are not peripheral enhancements. They fundamentally shape the strength of adoption pathways. Generative AI adoption is therefore a joint product of psychological evaluation and technical architecture.
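The buffering effect described above can be sketched as an interaction-term regression on synthetic data. This is only an illustration of the moderation logic (the study itself uses a structural equation model, and the variable names, coefficients, and data here are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic standardized constructs (illustrative, not the study's data)
mental_load = rng.normal(size=n)
output_quality = rng.normal(size=n)

# Simulate perceived usefulness: mental load hurts it, but high output
# quality weakens that negative effect (positive interaction term).
usefulness = (-0.40 * mental_load
              + 0.30 * output_quality
              + 0.20 * mental_load * output_quality
              + rng.normal(scale=0.5, size=n))

# Fit the moderated regression by ordinary least squares
X = np.column_stack([np.ones(n), mental_load, output_quality,
                     mental_load * output_quality])
beta, *_ = np.linalg.lstsq(X, usefulness, rcond=None)

# beta[1] < 0: mental load reduces perceived usefulness;
# beta[3] > 0: output quality buffers (moderates) that negative effect.
print(dict(zip(["intercept", "load", "quality", "load_x_quality"],
               beta.round(2))))
```

A positive interaction coefficient is the statistical signature of the claim that high-quality outputs soften the penalty cognitive strain imposes on perceived value.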

The extended TAM–CLT–D&M model significantly outperforms a baseline Technology Acceptance Model in both explanatory and predictive power. The integrated model explains a greater proportion of variance in perceived usefulness, attitude, behavioral intention, and actual use. Predictive assessments confirm that including system-level moderators substantially improves forecasting accuracy.

This advancement responds to growing calls within information systems research to move beyond explanation toward prediction. The extended model demonstrates that incorporating cognitive load and system attributes results in stronger out-of-sample predictive performance.
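The out-of-sample comparison can be illustrated with a toy train/test split: when the data-generating process really does contain a moderating effect, a model that includes the interaction predicts held-out observations better than a baseline that omits it. Everything below is a hedged sketch with synthetic data, not the study's actual models or figures:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 800

# Illustrative data-generating process: actual use depends on intention,
# and system integration strengthens that link (interaction effect).
intention = rng.normal(size=n)
integration = rng.normal(size=n)
use = (0.5 * intention
       + 0.3 * intention * integration
       + rng.normal(scale=0.6, size=n))

def oos_r2(X, y, split=600):
    """Fit OLS on a train split; report R^2 on the held-out split."""
    Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]
    beta, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    resid = yte - Xte @ beta
    return 1 - resid.var() / yte.var()

ones = np.ones(n)
baseline = np.column_stack([ones, intention])          # TAM-style predictor only
extended = np.column_stack([ones, intention, integration,
                            intention * integration])  # + moderator and interaction

print(f"baseline out-of-sample R^2: {oos_r2(baseline, use):.2f}")
print(f"extended out-of-sample R^2: {oos_r2(extended, use):.2f}")
```

The gap between the two held-out R² values mirrors the paper's point: explanatory fit on the training sample is not the test that matters; predictive accuracy on unseen cases is.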

Sustainability, inclusion, and long-term digital transformation

Sustainable generative AI adoption requires minimizing unnecessary mental load while maximizing transparency and integration. Excessive cognitive burden risks digital fatigue, disengagement, and inefficient use of human cognitive resources. Over time, such strain may undermine long-term viability.

As for practical implications, the study highlights the following:

  • Designers and developers: Prioritizing output quality, explainability, and friction reduction is essential. Transparent reasoning interfaces and context-aware prompts can reduce interpretive strain. Streamlined workflows lower interaction barriers. Seamless integration within existing applications enhances habitual use.
  • Organizations implementing generative AI: System integration and user experience design should be strategic priorities. Favorable attitudes and strong intentions are insufficient if technological friction or poor integration disrupts translation into actual use.
  • Policymakers and educational institutions: Reducing cognitive strain improves accessibility for non-expert users and individuals with lower digital literacy. Inclusive system design broadens participation in AI-driven environments and reduces inequality in digital competence.

The Importance–Performance Map Analysis further highlights that behavioral intention and attitude are the strongest drivers of actual use but perform slightly below optimal levels. This suggests that while users recognize the value of generative AI, interaction challenges and contextual friction still limit full engagement. Targeted improvements in transparency and integration may unlock stronger alignment between perception and behavior.
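The prioritization logic of an Importance–Performance Map can be sketched in a few lines: rank constructs that have a strong positive effect on actual use but still fall short of a performance target. The values and the target threshold below are invented for illustration, not taken from the study:

```python
# Illustrative Importance-Performance Map values (invented for the sketch;
# importance = total effect on actual use, performance = mean score on 0-100).
constructs = {
    "behavioral_intention": {"importance": 0.55, "performance": 72.0},
    "attitude":             {"importance": 0.40, "performance": 70.0},
    "perceived_usefulness": {"importance": 0.30, "performance": 78.0},
    "mental_load":          {"importance": -0.15, "performance": 48.0},
}

TARGET = 75.0  # assumed aspiration level for "optimal" performance

# Flag high-importance constructs that fall short of the target,
# strongest driver first.
priorities = sorted(
    (name for name, c in constructs.items()
     if c["importance"] > 0 and c["performance"] < TARGET),
    key=lambda name: -constructs[name]["importance"])

print("improvement priorities:", priorities)
```

With these illustrative numbers, behavioral intention and attitude surface as the top priorities: the strongest drivers of actual use with the largest remaining performance gap, which is exactly the pattern the analysis reports.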

Lastly, the study acknowledges several limitations. Data were collected through a cross-sectional survey in a single national context, relying on self-reported measures. Longitudinal research and real-time behavioral data could further validate the model. Additional GenAI-specific constructs such as AI safety, personalization depth, and hallucination control may also enrich future frameworks.

  • FIRST PUBLISHED IN:
  • Devdiscourse