Why users distrust AI: Anthropomorphic design fails without strong institutional trust

A new behavioral study reveals that human-like artificial intelligence may not be enough to win over consumers in fragile economies, where skepticism and institutional distrust shape how people engage with digital services. The research, based on Sierra Leone's banking sector, challenges long-held assumptions that making AI more "human" automatically boosts adoption.

Published in Behavioral Sciences, the study titled "Anthropomorphic AI and Consumer Skepticism: A Behavioral Study of Trust and Adoption in Fragile Economies" finds that anthropomorphic AI design does not directly influence consumers' willingness to adopt AI-driven banking services. Instead, its impact depends entirely on whether it can build trust and create a sense of social presence among users.

The researchers show that human-like AI features such as conversational tone, empathy, and personalization act only as indirect drivers of adoption. Their effectiveness is mediated through psychological pathways and significantly weakened by consumer skepticism rooted in institutional fragility.

Human-like AI alone cannot overcome trust deficits

Anthropomorphic AI has no direct effect on adoption behavior. Statistical analysis shows a negligible relationship between human-like design and users' intention to adopt AI banking services, contradicting evidence from high-income markets where such features often drive engagement.

Instead, anthropomorphism operates as what the researchers describe as a "relational amplifier." It influences adoption only by enhancing two internal psychological states: perceived social presence and trust in the AI system. Without these mediating factors, human-like design has no measurable behavioral impact.

Perceived social presence refers to the feeling that users are interacting with a responsive, human-like entity rather than a machine. Trust reflects confidence in the system's competence, integrity, and ability to protect sensitive financial data. Both factors were found to significantly increase adoption intentions, with trust emerging as the stronger predictor.

The findings highlight a key limitation of design-centric AI strategies. In environments where institutional trust is weak, users do not adopt technology simply because it feels familiar or engaging. Instead, they require assurance that the system is reliable and safe. Human-like cues must translate into credible trust signals before they influence behavior.

This dynamic is particularly relevant in Sierra Leone, where the banking sector is undergoing rapid digital transformation but adoption of AI tools remains uneven. The country's history of institutional fragility and limited regulatory oversight has shaped user expectations, making trust a prerequisite rather than an outcome of interaction.

Trust and social presence act as parallel drivers of adoption

The study introduces a dual-pathway model to explain how AI design influences user behavior in fragile economies. Anthropomorphic features simultaneously activate two psychological channels: an affective pathway through social presence and a cognitive pathway through trust.

Both pathways independently mediate the relationship between AI design and adoption. The analysis shows that perceived social presence accounts for roughly half of the indirect effect, while trust contributes the other half. Together, they fully explain how anthropomorphic AI shapes user intentions.
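The parallel-mediation structure described above can be sketched numerically. The simulation below uses made-up coefficients on synthetic data (it does not reproduce the study's dataset or estimates) to show how a product-of-coefficients analysis recovers two indirect pathways and a near-zero direct effect, the pattern the study reports.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data under the dual-pathway model (coefficients are illustrative,
# not the study's estimates): anthropomorphism (A) raises social presence (S)
# and trust (T); only S and T drive adoption intention (Y).
A = rng.normal(size=n)
S = 0.5 * A + rng.normal(size=n)             # affective pathway
T = 0.5 * A + rng.normal(size=n)             # cognitive pathway
Y = 0.4 * S + 0.6 * T + rng.normal(size=n)   # no direct A -> Y effect

def ols(X, y):
    """Least-squares coefficients for y ~ X (no intercept; data are mean-zero)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

a1 = ols(A[:, None], S)[0]                    # A -> S path
a2 = ols(A[:, None], T)[0]                    # A -> T path
b1, b2, c_prime = ols(np.column_stack([S, T, A]), Y)  # S, T, A -> Y paths

print(f"indirect effect via social presence: {a1 * b1:.2f}")
print(f"indirect effect via trust:           {a2 * b2:.2f}")
print(f"direct effect of anthropomorphism:   {c_prime:.2f}")
```

With these simulated coefficients, both indirect effects come out clearly positive while the direct effect estimate hovers near zero, mirroring the "full mediation" result the researchers describe.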

This parallel processing reflects how users in low-trust environments evaluate technology. Rather than relying on surface-level impressions, they engage in what the study describes as "dual-channel appraisal," assessing both relational warmth and institutional safety at the same time.

Social presence plays a crucial role in reducing psychological distance and making digital interactions feel more intuitive. In banking contexts, where face-to-face interaction has traditionally been the norm, this sense of connection can ease anxiety and improve user experience.

However, trust remains the dominant factor. Users are more likely to adopt AI services when they believe the system can handle financial transactions securely and act in their best interest. In high-stakes domains such as banking, emotional engagement alone is not enough.

The findings also show that anthropomorphic AI significantly increases both social presence and trust, confirming earlier research on human-AI interaction. Yet, unlike in developed markets, these effects do not translate directly into behavior. They must first pass through users' internal evaluation processes.

This difference underscores the importance of context. Mechanisms that work in technologically advanced, high-trust societies may not apply in environments where users approach digital systems with caution.

Skepticism sharply weakens AI's impact in fragile economies

The study identifies consumer skepticism as a critical factor shaping AI adoption. Defined as a tendency to doubt the motives and reliability of AI systems, skepticism acts as a filter that alters how users interpret anthropomorphic cues. The results show that higher levels of skepticism significantly weaken the effects of anthropomorphic AI on both social presence and trust. Among highly skeptical users, the impact of human-like design on social presence becomes statistically insignificant, while its effect on trust is reduced by more than half.

This means that the same AI interface can produce very different outcomes depending on the user's mindset. For individuals with low skepticism, anthropomorphic features enhance engagement and trust. For those with high skepticism, the same features may be ignored or even perceived as manipulative.

The study further reveals that skepticism disrupts the emotional pathway more strongly than the cognitive one. While the influence of anthropomorphism on social presence can disappear entirely under high skepticism, its effect on trust remains partially intact, though diminished.
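This asymmetric moderation pattern can also be illustrated with a small simulation. The sketch below uses invented coefficients on synthetic data (not the study's figures): high skepticism flattens the anthropomorphism-to-social-presence slope to roughly zero while only halving the anthropomorphism-to-trust slope, the qualitative pattern reported above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000

# Illustrative moderation sketch (made-up coefficients): skepticism K (0 = low,
# 1 = high) removes the anthropomorphism (A) -> social presence (S) effect and
# halves the A -> trust (T) effect.
A = rng.normal(size=n)
K = rng.integers(0, 2, size=n)
S = (0.5 - 0.5 * K) * A + rng.normal(size=n)    # emotional pathway vanishes
T = (0.5 - 0.25 * K) * A + rng.normal(size=n)   # cognitive pathway diminished

def slope(x, y):
    """Simple-regression slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x)

for k in (0, 1):
    m = K == k
    print(f"skepticism={k}: A->S slope {slope(A[m], S[m]):.2f}, "
          f"A->T slope {slope(A[m], T[m]):.2f}")
```

Estimating the slopes separately per skepticism group shows the social-presence path collapsing under high skepticism while the trust path survives in weakened form.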

This pattern aligns with theories of risk perception and privacy calculus, which suggest that users weigh potential benefits against perceived risks when interacting with technology. In fragile economies, where concerns about data security and institutional accountability are heightened, skepticism increases the perceived risks associated with AI.

The findings point to what the researchers describe as a "trust duality." Anthropomorphic AI can enhance adoption in low-skepticism contexts but may fail or even backfire in high-skepticism environments. This duality highlights the limits of relying solely on design to drive user engagement.

Implications for AI deployment and financial inclusion

The study carries significant implications for banks, developers, and policymakers seeking to expand AI adoption in emerging markets. It suggests that improving interface design alone will not be sufficient to drive uptake of AI-driven services.

Instead, organizations must focus on building foundational trust through transparency, data protection, and user education. Anthropomorphic features can enhance user experience, but only when supported by credible institutional frameworks.

For financial institutions, this means prioritizing trust-building measures such as clear communication about data usage, robust security protocols, and visible accountability mechanisms. AI systems must demonstrate reliability before they can benefit from human-like design elements.

The research also highlights the need for context-sensitive AI strategies. Developers should consider allowing users to adjust the level of anthropomorphism in interfaces, catering to different levels of skepticism. In some cases, simpler, more functional designs may be more effective than highly human-like systems.

For policymakers, the findings underscore the importance of strengthening regulatory frameworks and consumer protections. Trust in AI is closely tied to broader institutional trust, and efforts to improve governance can have a direct impact on technology adoption.

First published in: Devdiscourse