AI Tools Struggle with Medical Misinformation from Authoritative Sources
A new study finds that AI tools often accept misinformation as accurate when it appears to come from authoritative medical sources. Published in The Lancet Digital Health, the research shows that while AI tools were less likely to propagate misinformation from social media, they were far more susceptible to errors embedded in realistic medical notes.
A new study highlights the vulnerability of artificial intelligence systems to misinformation, especially when it appears to come from authoritative medical sources. The research, published in The Lancet Digital Health, tested 20 AI models and found they were often misled by fabricated content in doctors' discharge notes.
Unlike incorrect information shared on social media, which the AI tools often questioned, errors embedded in realistic-looking hospital notes were frequently accepted and repeated. Dr. Eyal Klang of the Icahn School of Medicine, who co-led the study, emphasized the challenge this poses in the medical field.
Dr. Girish Nadkarni added that while AI offers potential benefits for clinicians and patients, it needs stronger checks to ensure medical claims are accurate. The findings point to a need for more robust safeguards before these systems become integral to healthcare delivery.