Digital twins could bridge, or deepen, global healthcare inequalities

CO-EDP, VisionRI | Updated: 28-10-2025 17:12 IST | Created: 28-10-2025 17:12 IST

A new study assesses how the rise of virtual human twins (VHTs) could transform the future of healthcare decision-making. Published in AI & Society, the study "Decider, Overruler, or Trusted Companion? An Exploration of the Advent of the Virtual Human Twin and Its Impact on Decision-Making in Healthcare" provides one of the most comprehensive analyses yet of how digital replicas of humans, powered by artificial intelligence, may influence diagnosis, treatment, and patient autonomy.

The authors explore how digital twin technology, initially developed for engineering and industrial systems, is now entering the human domain through AI-enabled health simulations. These digital entities, built from real-time medical data, behavioral metrics, and environmental information, promise personalized care and predictive diagnostics, but also pose deep ethical and social dilemmas.

The study uses expert interviews and public engagement workshops to assess how clinicians, policymakers, and citizens perceive this emerging technology and its potential to alter the balance of decision-making between humans and machines in healthcare.

The promise and complexity of the virtual human twin

The virtual human twin, a computationally intelligent replica that mirrors an individual's biological and behavioral characteristics, is designed to simulate and predict medical outcomes, enabling preventive interventions, personalized treatments, and real-time health monitoring. The authors describe this technology as a potential "digital companion" in medical practice, capable of processing vast datasets that no clinician could handle alone.

However, the study notes that human biology is far more complex and less predictable than mechanical systems, making it difficult to create accurate and stable digital twins. Unlike industrial twins used to model machines or infrastructure, virtual human twins must integrate biological variability, lifestyle factors, and psychological data. These intricacies make the modeling process prone to error, bias, and misinterpretation, especially when fed by incomplete or skewed data sources.

The research identifies three primary advantages of the VHT model. First, it could vastly improve diagnostic accuracy by synthesizing multimodal data streams, such as genomic information, medical imaging, and wearable sensor data. Second, it could help clinicians test treatment scenarios digitally before applying them to patients, reducing medical risk. Third, it may empower patients to better understand their health trajectories and participate more actively in their care.

Yet, these promises come with major caveats. The study highlights that the development of virtual human twins raises difficult questions about data privacy, consent, and ownership. Individuals may lose control over their personal health data once it becomes part of dynamic models that evolve over time. Furthermore, unequal access to digital infrastructure risks creating a new "health divide" between technologically advanced and resource-constrained healthcare systems.

Decision-making, trust, and the role of the clinician

The research highlights the changing nature of decision-making in medicine. As AI models grow more autonomous, healthcare could shift from a clinician-led process to one increasingly influenced, or even overruled, by algorithmic reasoning. The authors identify three potential roles for the VHT: the decider, where the AI system takes autonomous control; the overruler, where it challenges human judgment; and the trusted companion, where it acts as a collaborative tool enhancing clinical reasoning.

Through qualitative analysis of expert insights and workshop feedback, the study finds that both clinicians and patients are divided on how far AI systems should influence care decisions. While medical professionals welcome the efficiency and predictive capability of VHTs, they also express concern that opaque algorithms could undermine professional accountability. Clinicians fear being reduced to "operators" of algorithmic systems rather than decision-makers grounded in empathy, context, and experience.

Patients, on the other hand, display ambivalence. Many participants appreciate the potential of digital twins to improve treatment accuracy but question whether they would trust machine-generated diagnoses without human explanation. The authors note that trust in medical AI depends on transparency, interpretability, and shared responsibility. If virtual twins make decisions that are inscrutable or biased, patients may lose confidence not only in the technology but also in the healthcare institutions deploying it.

The authors argue that maintaining the human element in medicine is non-negotiable. For the virtual human twin to function as a genuine "trusted companion," it must augment human judgment rather than replace it. The expert remains the ultimate decision authority, interpreting algorithmic insights within ethical, emotional, and social contexts that machines cannot replicate.

Ethical, regulatory, and social implications

The study also explores the ethical and governance challenges surrounding the implementation of VHTs. One pressing issue is data consent—how patients can meaningfully authorize the use of their information in evolving digital systems that continue to learn from them over time. Another is algorithmic bias, which may arise if AI models are trained on data unrepresentative of global populations, potentially reinforcing existing healthcare inequities.

The authors propose that any large-scale deployment of VHTs must include clear regulatory oversight, robust transparency mechanisms, and participatory governance involving clinicians, technologists, and the public. They call for the establishment of international ethical frameworks to ensure fairness and accountability in data use, particularly in cross-border contexts where patient information may travel through multiple jurisdictions.

Moreover, the study calls for training and education across all levels of healthcare. Clinicians must gain literacy in data science and algorithmic auditing, while patients should be equipped with the knowledge to understand how AI models operate and how their personal data contribute to clinical decisions. Public trust, the authors argue, will depend on inclusivity, ensuring that all populations, not just those in high-income regions, have equitable access to digital health innovations.

In addition to governance, the authors highlight the psychological dimension of digital twin technology. The creation of a virtual "self" that predicts health outcomes introduces new questions about identity and autonomy. Patients may experience anxiety or fatalism when confronted with probabilistic predictions of disease or death. Ethical integration of VHTs, therefore, requires careful management of human emotion and expectation in the face of machine-generated foresight.

  • First published in: Devdiscourse