More AI, less trust? Patients pull back as automation grows in healthcare
A nationwide experimental study is raising fresh concerns about how artificial intelligence (AI) is transforming patient behavior in primary healthcare systems. The research finds that deeper integration may come at a cost: declining patient willingness to seek care.
Published in Healthcare, the study titled "More AI, Less Care-Seeking? A National Survey Experiment on the Impact of AI Intensity on Patient Care-Seeking Intention in Chinese Family Doctor Services" examines how varying levels of AI involvement influence patient decisions in China's primary care system.
Based on a large-scale experimental survey involving 2,790 participants across 31 provinces, the study assesses how patients respond not just to the presence of AI in healthcare, but to the degree of its involvement in clinical decision-making.
Higher AI integration linked to sharp decline in patient care-seeking intentions
The study finds that as AI takes on a more dominant role in clinical encounters, patient willingness to seek care declines significantly. To test this, the researchers introduced a nuanced "intensity gradient," ranging from low-level administrative assistance to full AI-led decision-making.
At low levels of integration, AI functions as a background support tool, helping physicians organize patient records while maintaining full human control over clinical decisions. At medium levels, AI contributes analytical insights and diagnostic suggestions, with doctors acting as validators. At high levels, however, AI becomes the primary decision-maker, with physicians largely implementing AI-generated treatment plans.
The results show a steady decline in patient care-seeking intention across this spectrum. Compared to a human-only baseline scenario, even low levels of AI integration reduce willingness to seek care. The decline becomes more pronounced at medium levels and reaches its lowest point when AI assumes primary decision authority.
Statistical analysis confirms that this pattern is not random. The negative relationship between AI intensity and care-seeking intention is both strong and statistically significant, indicating a robust behavioral response to increasing automation in clinical settings.
This finding challenges prevailing assumptions that technological advancement in healthcare will naturally lead to greater patient acceptance. Instead, it suggests that patients are sensitive not only to the presence of AI but to how it reshapes the perceived role of the physician.
The study highlights that trust in primary care is deeply tied to the visibility of human agency. When patients perceive that decision-making authority has shifted away from the physician, their confidence in the care process weakens, leading to reduced engagement with healthcare services.
Medical-specific AI models partially offset patient resistance
While the overall trend points to declining trust with higher AI integration, the study identifies a mitigating factor: the type of AI system used. Specifically, labeling AI tools as medical-specific large language models, rather than general-purpose systems, appears to improve patient acceptance.
Across all levels of AI intensity, scenarios involving medical-specific AI consistently resulted in higher care-seeking intention compared to those using general-purpose AI models. This suggests that patients respond positively to signals of domain expertise and specialization.
The distinction between general and medical AI models is more than semantic. Medical-specific systems are framed as being trained on clinical guidelines and authoritative knowledge bases, which enhances their perceived credibility. In contrast, general-purpose AI systems lack this explicit association with medical expertise.
However, the study finds that this buffering effect has limits. While medical-specific AI improves acceptance, it does not fully counteract the negative impact of increasing AI intensity. As AI becomes more central to decision-making, even specialized systems struggle to maintain patient trust.
Interestingly, the interaction between AI intensity and AI type does not reach statistical significance, indicating that while medical AI offers a consistent advantage, it does not fundamentally alter the overall downward trend in care-seeking intention.
The findings suggest that credibility cues can influence patient perceptions but cannot fully compensate for concerns about reduced human involvement. In high-stakes contexts such as healthcare, patients appear to prioritize the presence of human judgment over the technical sophistication of AI systems.
Trust, human agency, and the limits of AI-driven healthcare
Algorithm aversion theory suggests that individuals are inherently skeptical of automated decision-making, particularly in complex and high-risk domains. This skepticism intensifies when AI systems are perceived to replace rather than support human expertise.
Trust theory further emphasizes the importance of perceived competence, benevolence, and agency in healthcare relationships. In primary care settings, where long-term relationships and continuity are central, the presence of a human decision-maker plays a critical role in building and maintaining trust.
The study's findings indicate that AI integration disrupts this dynamic by altering perceptions of who is responsible for clinical decisions. As AI becomes more visible and directive, patients may feel that their care is being standardized or depersonalized, reducing their willingness to engage with healthcare providers.
The Technology Acceptance Model, which focuses on perceived usefulness and ease of use, appears insufficient to fully explain patient behavior in this context. While AI may improve efficiency and accuracy, these benefits do not necessarily translate into greater acceptance when relational factors are at stake.
The research highlights that primary care operates under a different logic than other areas of healthcare. Unlike specialized or transactional services, primary care relies heavily on continuity, personal attention, and trust. In this environment, technological efficiency may be secondary to the perceived quality of human interaction.
The findings also point to a critical threshold in AI integration. Low and medium levels of AI involvement are generally tolerated, as long as the physician remains visibly in control. However, once AI becomes the dominant decision-maker, patient resistance increases sharply. This threshold represents a key design challenge for healthcare systems seeking to integrate AI without undermining patient trust. It suggests that maintaining a clear and visible role for human clinicians is essential for preserving patient engagement.
Implications for healthcare policy and AI system design
The findings highlight the importance of balancing technological innovation with patient-centered care. One key implication is the need for a human-in-the-loop approach, where AI supports but does not replace clinical decision-making. Ensuring that physicians remain actively involved and visibly responsible for patient care can help mitigate the negative effects of AI integration.
The research also underscores the need for transparency in AI deployment. Patients need to understand how AI is being used and how decisions are made. Clear communication about the role of AI in clinical encounters can help build trust and reduce uncertainty.
Another critical consideration is the design of AI systems themselves. Rather than focusing solely on performance metrics, developers must consider how systems are perceived by patients. This includes not only accuracy and efficiency but also factors such as explainability, accountability, and alignment with clinical workflows.
The study further suggests that regulatory frameworks should address not only the technical performance of AI systems but also their impact on patient behavior and healthcare utilization. Policies that ensure accountability and protect patient trust will be essential as AI becomes more deeply embedded in healthcare systems.
Finally, the research points to the need for ongoing evaluation and adaptation. As AI technologies evolve, their effects on patient behavior may change, and continuous monitoring will be necessary to ensure that AI integration supports rather than undermines healthcare goals.
First published in: Devdiscourse