Who is responsible when AI fails in cancer treatment?

New research examines the legal and ethical implications of AI-driven medical systems, particularly in oncology. Based on interdisciplinary expertise spanning medicine, law, and surgery, the study highlights the growing tension between technological innovation and regulatory preparedness.

The study, titled "Legal and ethical reflections on the use of artificial intelligence in the diagnosis and treatment of cancer: who assumes responsibility?" and published in Frontiers in Artificial Intelligence, provides an in-depth analysis of how AI is transforming cancer care while simultaneously challenging existing legal frameworks. It underscores a major dilemma: as AI systems become more autonomous, determining liability for medical errors becomes increasingly complex.

AI transforms cancer diagnosis and treatment but raises new clinical risks

AI is already playing a critical role in oncology, particularly in imaging diagnostics, risk prediction, and treatment planning. The study outlines how AI systems can analyze vast amounts of clinical data, including imaging scans, laboratory results, and patient histories, to identify patterns that may not be visible to human clinicians. This capability enhances diagnostic accuracy and enables more personalized treatment strategies.

In cancer care, AI-driven tools such as radiomics allow for the extraction of detailed tumor characteristics, including shape, density, and heterogeneity. These insights support clinicians in differentiating between benign and malignant tumors and in predicting disease progression. Machine learning models further enhance this process by identifying risk factors and forecasting patient outcomes based on historical data.
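
To make the radiomics idea concrete, the sketch below reduces a segmented tumor image to a handful of hand-crafted features of the kind described above (size, mean density, heterogeneity) and fits a toy benign-versus-malignant classifier. This is a minimal illustration on synthetic data, not the study's pipeline; the feature choices, image sizes, and thresholds are assumptions made for the example.

```python
# Minimal, hypothetical radiomics-style sketch: summarize a segmented
# tumor as a few hand-crafted features, then fit a toy classifier.
# Synthetic data only -- not a clinical method.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(image, mask):
    """Reduce the tumor region of a 2D scan to a small feature vector."""
    roi = image[mask]                 # pixel intensities inside the tumor
    return np.array([
        mask.sum(),                   # size (a crude shape feature)
        roi.mean(),                   # mean density
        roi.std(),                    # heterogeneity (intensity spread)
    ])

rng = np.random.default_rng(0)

def synthetic_case(malignant):
    """Fake scan + segmentation mask; malignant tumors are bigger, noisier."""
    image = rng.normal(0.3, 0.05, size=(64, 64))
    size = 14 if malignant else 7
    mask = np.zeros((64, 64), dtype=bool)
    mask[20:20 + size, 20:20 + size] = True
    image[mask] += rng.normal(0.4, 0.2 if malignant else 0.05, mask.sum())
    return image, mask

X = []
y = [i % 2 for i in range(100)]       # alternate benign (0) / malignant (1)
for label in y:
    image, mask = synthetic_case(bool(label))
    X.append(extract_features(image, mask))

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Real radiomics pipelines extract hundreds of such features from 3D scans; the point of the sketch is only to show how imaging data becomes tabular input for a predictive model.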

The study highlights that AI can significantly reduce the time required for diagnosis and treatment planning. While traditional clinical decision-making processes may take several minutes per patient, AI systems can generate recommendations in seconds, enabling faster and more efficient care delivery. This efficiency is particularly valuable in oncology, where early detection and timely intervention are critical.

Beyond diagnostics, AI is increasingly being used in treatment. Clinical decision support systems assist oncologists in selecting therapies tailored to individual patients, taking into account factors such as tumor type, stage, and genetic profile. AI is also contributing to the development of new drugs and treatment protocols by analyzing large datasets from clinical trials and research studies.

Robotic surgery represents another major area of AI application. Surgical robots equipped with AI capabilities offer enhanced precision, improved visualization, and reduced invasiveness. These systems can guide surgeons in real time, providing feedback on instrument positioning and helping to avoid damage to surrounding tissues. As a result, patients may experience fewer complications, shorter hospital stays, and faster recovery times.

However, the study cautions that these technological advances come with significant risks. AI systems rely heavily on the quality and completeness of the data used to train them. Errors in data input or algorithm design can lead to incorrect diagnoses or treatment recommendations. In oncology, where decisions can be life-critical, such errors may have serious consequences.

The research also highlights concerns about the growing autonomy of AI systems. As machine learning algorithms evolve, they may begin to make decisions without direct human oversight. While this increases efficiency, it also raises questions about reliability and accountability, particularly when outcomes are unfavorable.

Legal uncertainty deepens as responsibility for AI errors remains unclear

The study finds that no clear legal framework currently governs the use of AI in medicine. Existing laws, including those on medical malpractice and product liability, were not designed to address the complexities that AI systems introduce. As a result, determining responsibility when AI causes harm remains highly uncertain.

The study identifies multiple actors who may be held liable in the event of an AI-related error. These include the physician, who uses AI as a decision-support tool; the healthcare institution, which implements and manages the technology; and the software developer, who designs and maintains the AI system. Each of these parties plays a role in the deployment and operation of AI, making it difficult to assign responsibility in a straightforward manner.

In cases where AI is used as a support tool, the physician typically retains final responsibility for clinical decisions. However, as AI systems become more autonomous, this assumption becomes increasingly problematic. If a system generates a recommendation that a physician follows, and that recommendation proves harmful, it is unclear whether the fault lies with the clinician, the algorithm, or the underlying data.

The study also explores the implications of defective AI systems. Under existing regulations, software used in medical contexts may be classified as a medical device, subjecting it to strict safety and performance standards. If a defect in the system causes harm, the manufacturer may be held liable. However, proving such defects can be challenging, particularly in complex machine learning models where decision-making processes are not always transparent.

Another key issue is the role of data quality. AI systems depend on large volumes of data, and inaccuracies or biases in this data can lead to flawed outcomes. For example, if a model is trained on a dataset that does not adequately represent diverse patient populations, it may produce biased or unreliable results. In such cases, liability may extend to those responsible for data collection and model training.
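
One practical response to this risk is to audit a training set for under-represented subgroups before any model is built. The sketch below shows what such a check might look like; it is a hypothetical illustration rather than a method from the study, and the field name, group labels, and 5% threshold are assumptions chosen for the example.

```python
# Hypothetical training-data audit: flag patient subgroups that are too
# scarce for a model to learn reliable patterns from. The "subgroup"
# field and the 5% threshold are illustrative assumptions.
from collections import Counter

def representation_report(records, key="subgroup", min_share=0.05):
    """Print each subgroup's share of the dataset and flag scarce ones."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    for group, n in sorted(counts.items()):
        share = n / total
        flag = "UNDER-REPRESENTED" if share < min_share else "ok"
        print(f"{group:>8}: {n:4d} records ({share:6.1%}) {flag}")

# Toy dataset: subgroup "B" makes up only 3% of the records.
records = [{"subgroup": "A"} for _ in range(970)] + \
          [{"subgroup": "B"} for _ in range(30)]
representation_report(records)
```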

The study emphasizes that the current legal environment is not equipped to handle these challenges. While regulations such as the General Data Protection Regulation (GDPR) address data privacy concerns, they do not fully resolve issues related to accountability and liability in AI-driven healthcare. Emerging frameworks, including the European Union's AI Act, aim to address these gaps by classifying medical AI systems as high-risk and imposing stricter requirements for transparency and human oversight.

Despite these efforts, significant uncertainties remain. The study notes that legal systems tend to evolve more slowly than technological innovations, creating a lag that can leave patients and practitioners exposed to unregulated risks.

Ethical concerns reshape doctor-patient relationships in the AI era

The integration of AI into oncology is also transforming the ethical landscape of healthcare. The study highlights concerns about how AI may alter the traditional doctor-patient relationship, which has long been built on trust, empathy, and human interaction.

AI systems excel at processing data and identifying patterns, but they lack the emotional intelligence and interpersonal skills that are essential to patient care. In oncology, where patients often face complex and emotionally charged diagnoses, the role of empathy and communication cannot be overstated. The study warns that overreliance on AI could undermine these human elements, potentially affecting patient satisfaction and outcomes.

Informed consent emerges as a critical ethical issue. Patients must be made aware of how AI is used in their diagnosis and treatment, including its benefits, limitations, and potential risks. This requirement adds a new layer of complexity to clinical practice, as healthcare providers must ensure that patients understand the role of AI in their care.

Data privacy and confidentiality are also central concerns. AI systems require access to large volumes of sensitive medical data, raising the risk of breaches and misuse. Ensuring compliance with data protection regulations is essential, but the study notes that balancing data accessibility with privacy remains a significant challenge.

Another ethical dimension involves fairness and equity. AI systems must be designed and implemented in ways that do not discriminate against certain populations. Biases in data or algorithms can lead to unequal treatment outcomes, exacerbating existing disparities in healthcare. Addressing these issues requires careful attention to data diversity and algorithm design.
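
One way to surface such disparities is to compare a model's error rates across patient groups rather than reporting a single aggregate score. The sketch below, again a hypothetical illustration rather than anything from the study, computes per-group accuracy and the gap between the best- and worst-served groups.

```python
# Hypothetical fairness check: per-group accuracy and the worst-case gap.
# Group labels and predictions are synthetic; the metric is an assumption.
from collections import Counter

def per_group_accuracy(groups, labels, predictions):
    """Return accuracy for each group and the max-min accuracy gap."""
    hits, totals = Counter(), Counter()
    for g, y, p in zip(groups, labels, predictions):
        totals[g] += 1
        hits[g] += int(y == p)
    acc = {g: hits[g] / totals[g] for g in totals}
    return acc, max(acc.values()) - min(acc.values())

# Toy example: group "B" is misclassified far more often than group "A".
groups      = ["A"] * 90 + ["B"] * 10
labels      = [1] * 100
predictions = [1] * 88 + [0] * 2 + [1] * 4 + [0] * 6

acc, gap = per_group_accuracy(groups, labels, predictions)
print(acc, "accuracy gap:", round(gap, 2))
```

A single headline accuracy would hide exactly the kind of unequal treatment outcomes the study warns about, which is why disaggregated evaluation is a common recommendation in fairness auditing.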

The study also raises questions about the future role of clinicians. As AI systems take on more responsibilities, there is a risk that medical professionals may become overly dependent on technology. This could lead to a decline in clinical skills, particularly among younger practitioners who rely heavily on AI tools. In situations where AI systems fail or are unavailable, this dependency could have serious consequences.

Toward a balanced future of innovation and accountability

The study concludes that while the potential benefits of AI in oncology are substantial, they must be matched by robust legal and ethical frameworks that ensure patient safety and accountability.

The authors stress the importance of maintaining human oversight in all AI-assisted medical decisions. Physicians must remain the final decision-makers, using AI as a tool rather than a replacement. This approach preserves the integrity of clinical judgment while leveraging the strengths of the technology.

The study also calls for clearer regulations that define the responsibilities of all parties involved in AI deployment. This includes establishing standards for data quality, algorithm transparency, and system performance. Legal frameworks must be adaptable, capable of evolving alongside technological advancements.

Education and training are identified as key priorities. Healthcare professionals must be equipped with the knowledge and skills needed to effectively use AI systems and understand their limitations. This includes training in data interpretation, algorithmic bias, and ethical considerations.
