AI in healthcare must protect patient agency and doctor oversight


CO-EDP, VisionRI | Updated: 19-02-2026 12:19 IST | Created: 19-02-2026 12:19 IST

A new study published in the journal AI & Society suggests that the future of AI in medicine will depend less on raw computing power and more on whether patients trust it. The research examines how citizen voices can shape the ethical and technical development of AI systems used in healthcare, focusing on skin cancer diagnosis and treatment.

The study, titled Enhancing trust and agency: integrating citizen perspectives into AI-assisted shared decision-making in medicine, is based on the OCTOLAB project, an interdisciplinary initiative that combines optical coherence tomography imaging with AI-based analysis and laser therapy to detect and treat basal cell carcinoma, the most common form of skin cancer worldwide.

Doctor–patient relationship under pressure

Basal cell carcinoma is often diagnosed at later stages, despite being highly treatable when detected early. The OCTOLAB project seeks to change that by training AI systems to analyze high-resolution imaging, identify tumor thickness and subtype, and calibrate laser treatment parameters accordingly. In theory, this could reduce invasive procedures and personalize therapy.

But the research team did not begin with algorithms. Instead, they organized a scenario-based focus group in Augsburg, Germany, involving 13 citizens recruited from the local community. Participants were introduced to realistic scenarios of AI use in dermatology, including hospital-based diagnosis, nurse-operated systems and even home-use AI devices.

From these discussions, one theme surfaced repeatedly: concern over how AI might reshape the doctor–patient relationship. Participants expressed a strong preference for AI as a supportive tool rather than a replacement for clinical judgment. While acknowledging that AI systems may process more data than humans and potentially improve diagnostic accuracy, they emphasized the irreplaceable role of human empathy, trust and contextual understanding in medical care.

The fear was not that AI would make mistakes. It was that doctors, under time pressure or institutional incentives, might rely too heavily on automated recommendations without sufficient scrutiny. The risk of automation bias, in which clinicians defer to algorithmic outputs, emerged as a central concern.

Trust, participants indicated, does not rest solely on technical performance. It is rooted in communication. Patients want doctors who can interpret AI findings, explain them in understandable terms and remain accountable for final decisions. An opaque system embedded in a rushed clinical workflow risks eroding confidence, even if its statistical accuracy is high.

The researchers took these concerns seriously. Within the OCTOLAB framework, medical training sessions are being revised to address automation bias and reinforce the primacy of human oversight. The project also integrates explainable AI techniques designed to produce outputs that clinicians can meaningfully interpret and communicate.

Rather than presenting AI as an autonomous authority, the team positions it as an extension of medical tools. The goal is not to automate care but to enhance it, strengthening rather than weakening the doctor–patient bond.

Transparency, data bias and the fight for patient agency

If the doctor–patient relationship is the emotional core of medical care, patient agency is its ethical foundation. The study reveals that citizens do not uniformly reject AI's so-called black box nature. Some participants compared it to the limited insight patients already have into doctors' reasoning, arguing that institutional credentials often substitute for deep understanding.

Yet the demand for transparency was clear. Participants called for accessible information about when and how AI is used in their care. They wanted to know what kinds of data trained the system, whether certain populations were underrepresented and whether they could opt out of AI-supported diagnostics without penalty.

Data protection and anonymization were particularly sensitive topics. Citizens raised concerns about whether medical practices are adequately prepared to safeguard patient data in increasingly digital environments.

The issue of algorithmic bias loomed large. Dermatology AI systems, like many medical algorithms, have historically relied on datasets dominated by lighter skin tones. This imbalance can result in reduced diagnostic accuracy for individuals with darker skin, potentially reinforcing health disparities.
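
One common way to surface such a gap is to evaluate a model separately for each skin-tone group rather than reporting a single overall accuracy. The sketch below uses entirely synthetic predictions and hypothetical Fitzpatrick-style groupings, not any OCTOLAB data, to show how a stratified sensitivity check can reveal a disparity that an aggregate figure would hide.

```python
# Minimal sketch of a per-subgroup accuracy audit; the groups, labels and
# predictions are synthetic stand-ins invented for illustration.
import numpy as np

# Hypothetical evaluation records: (skin-tone group, true label, model prediction)
records = [
    ("I-II", 1, 1), ("I-II", 1, 1), ("I-II", 0, 0), ("I-II", 1, 1),
    ("V-VI", 1, 0), ("V-VI", 1, 1), ("V-VI", 0, 0), ("V-VI", 1, 0),
]

for group in ("I-II", "V-VI"):
    y_true = np.array([t for g, t, p in records if g == group])
    y_pred = np.array([p for g, t, p in records if g == group])
    tumors = y_true == 1
    # Sensitivity: share of true tumor cases the model actually flags.
    sensitivity = (y_pred[tumors] == 1).mean()
    print(f"group {group}: sensitivity {sensitivity:.0%} on {tumors.sum()} tumor cases")
```

In this toy example the model catches every tumor in the lighter-skin group but misses most in the darker-skin group, which is exactly the kind of imbalance an overall accuracy score can mask.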

The research team acknowledges this risk and outlines several mitigation strategies. These include diversifying training datasets, exploring demographically tailored models and implementing federated learning approaches that allow multiple institutions to train AI systems collaboratively without centralizing sensitive data.
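
Of these strategies, federated learning may be the least familiar to general readers. The minimal sketch below, built on a toy logistic-regression model and synthetic data invented purely for illustration, shows the basic idea: each institution trains on its own data, and only model parameters, never patient records, travel to a central server for averaging.

```python
# Minimal sketch of federated averaging (FedAvg) with a toy logistic-regression
# model and synthetic data; illustrative only, not the OCTOLAB implementation.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train locally on one institution's data; only the weights leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted probability
        grad = X.T @ (p - y) / len(y)        # logistic-loss gradient
        w -= lr * grad
    return w

# Three hypothetical institutions, each holding its own never-shared dataset.
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 5))
    true_w = rng.normal(size=5)
    y = (1.0 / (1.0 + np.exp(-X @ true_w)) > 0.5).astype(float)
    sites.append((X, y))

global_w = np.zeros(5)
for communication_round in range(10):
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    # The server aggregates parameters only; raw patient data stays local.
    global_w = np.mean(local_ws, axis=0)

print("global weights after 10 rounds:", np.round(global_w, 2))
```

The design choice being illustrated is that the coordination step operates on model weights alone, which is what allows hospitals with sensitive records to contribute to a shared model without pooling their data.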

Transparency, the study argues, must extend beyond technical explanations. It must include honest communication about limitations, uncertainty and potential bias. The researchers introduce the concept of explainable AI literacy, emphasizing that clinicians need the skills to critically evaluate AI outputs and convey their implications responsibly.

Confidence scores, visual overlays and probability estimates are being integrated into OCTOLAB's imaging systems to help both doctors and patients understand how recommendations are generated. However, the researchers stress that explainability alone does not guarantee agency. Agency requires that patients can question, accept or decline AI involvement in their treatment without coercion or disadvantage.
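
What a confidence score paired with a deferral rule might look like in practice can be illustrated with a small sketch; the class labels, threshold and logits below are assumptions made for the example, not details of the OCTOLAB system.

```python
# Minimal sketch of presenting a diagnostic suggestion together with a model
# confidence score; all names and numbers are hypothetical.
import numpy as np

CLASSES = ["no tumor", "superficial BCC", "nodular BCC"]   # hypothetical labels

def softmax(logits):
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def present_recommendation(logits, defer_below=0.75):
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(np.argmax(probs))
    confidence = float(probs[best])
    # Low-confidence cases are flagged for unassisted clinical review rather
    # than presented as a firm recommendation.
    if confidence < defer_below:
        return f"Uncertain (top guess {CLASSES[best]}, {confidence:.0%}); defer to clinician."
    return f"Suggested finding: {CLASSES[best]} ({confidence:.0%} model confidence)."

print(present_recommendation([0.2, 2.9, 0.4]))   # confident suggestion
print(present_recommendation([0.9, 1.1, 1.0]))   # flagged as uncertain
```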

The distinction between informed consent and meaningful participation becomes central. In AI-assisted medicine, patients must not become passive recipients of algorithmic decisions. Instead, they should remain active participants in shared decision-making.

Context matters: Diagnosis vs therapy, equity vs efficiency

The third major finding of the study concerns context. Citizens did not view AI use in medicine as a uniform phenomenon. Their acceptance depended on how, where and for what purpose the technology was deployed.

Diagnostic applications of AI were generally viewed more favorably than therapeutic ones. Participants were more comfortable with AI suggesting possible diagnoses than with AI influencing treatment decisions. The idea of AI calibrating laser therapy parameters, as in the OCTOLAB system, prompted greater skepticism.

This distinction highlights a perception gap. While AI-based diagnostic tools are increasingly discussed in public discourse, therapeutic applications remain less familiar and can evoke fears of depersonalized or automated treatment.

The researchers recognize that clearer communication is needed to explain how therapeutic recommendations remain under physician control. In OCTOLAB's design, AI outputs inform but do not replace clinical decision-making.

The healthcare setting also shaped attitudes. Scenarios in which trained nurses used AI tools raised questions about professional roles and accountability. A home-use AI device, meanwhile, triggered concerns about inequality and commercialization.

Participants worried that advanced AI systems might deepen class-based disparities, with wealthier patients accessing cutting-edge diagnostics while others rely on overstretched public services. They questioned whether economic incentives or insurance reimbursement structures might push AI adoption for cost-saving rather than patient-centered reasons.

These reflections connect AI ethics to broader systemic issues. Even the most transparent algorithm can lose legitimacy if embedded in a healthcare system perceived as profit-driven or inequitable.

To address concerns about rare conditions and imbalanced datasets, the OCTOLAB team is exploring active learning methods. In this approach, the AI system queries clinicians for targeted annotations, continuously improving performance while keeping medical experts in the loop. This iterative design ensures that AI learns from doctors rather than replacing them.
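
The mechanics of uncertainty-based active learning can be shown with a brief sketch; the toy model, synthetic data and clinician "oracle" below are stand-ins invented for the example, not the project's actual pipeline.

```python
# Minimal sketch of uncertainty sampling: the model asks a clinician to label
# the cases it is least sure about. Synthetic features and a toy logistic
# model stand in for the real imaging pipeline.
import numpy as np

rng = np.random.default_rng(1)

X_pool = rng.normal(size=(500, 4))                    # unlabeled candidate cases
true_w = np.array([1.5, -2.0, 0.5, 1.0])
oracle = lambda X: ((X @ true_w) > 0).astype(float)   # stands in for the clinician's annotation

def fit(X, y, lr=0.1, epochs=200):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Start from a small labeled seed set, then query one case per round.
labeled_idx = list(rng.choice(len(X_pool), size=10, replace=False))
for query_round in range(5):
    w = fit(X_pool[labeled_idx], oracle(X_pool[labeled_idx]))
    probs = 1 / (1 + np.exp(-X_pool @ w))
    uncertainty = np.abs(probs - 0.5)          # closest to 0.5 = least certain
    uncertainty[labeled_idx] = np.inf          # do not re-query labeled cases
    query = int(np.argmin(uncertainty))        # the case the model wants annotated
    labeled_idx.append(query)                  # the clinician labels it before the next round

print(f"labeled after 5 query rounds: {len(labeled_idx)} cases")
```

The point of the loop is that the expert's limited annotation time is spent on the cases the model finds hardest, which is how such systems keep clinicians in the loop while improving on rare or underrepresented presentations.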

The interdisciplinary nature of the project stands out. Ethicists, social scientists, dermatologists and computer scientists collaborate throughout development. Ethical reflection is embedded from the outset rather than appended after technical design.

The study acknowledges some limitations. The focus group was small and racially homogeneous, reflecting local recruitment methods. Participants were citizens rather than patients directly affected by basal cell carcinoma, which may explain why discussions centered on general AI concerns rather than disease-specific details. The findings are exploratory rather than statistically representative.

Even so, the impact on system design was tangible. Training protocols were revised. Explainability features were refined. Data bias strategies were prioritized. The research demonstrates that citizen engagement can influence technical architecture in concrete ways.

The study argues that the future of AI in medicine will not be decided solely by performance metrics. It will depend on whether development teams recognize that AI systems reshape authority, trust and accountability within clinical environments.

Shared decision-making, long a cornerstone of patient-centered care, becomes more complex when algorithms enter the consultation room. The study suggests that strengthening this model requires more than better code. It requires institutional safeguards, regulatory clarity and sustained public dialogue.

FIRST PUBLISHED IN: Devdiscourse