AI Chatbot Responses: Unmasking the Risks in Health Information
A study reveals that AI chatbots often provide inaccurate medical information, with nearly half of their responses deemed problematic. Researchers stress the need for public education and oversight to prevent misinformation due to chatbot limitations in reasoning and evidence weighing.
Recent research highlights the troubling inaccuracy of AI chatbots in providing medical information, raising concerns about potential public misinformation. The study, published in BMJ Open, found that 49.6% of chatbot responses were problematic, misrepresenting both scientific and non-scientific claims.
Researchers from The Lundquist Institute and UCLA point to the rapid adoption of generative AI in health and urge public education and oversight. Among the five widely used chatbots tested, Grok by xAI generated the most highly problematic responses compared with the others studied, including Google's Gemini and OpenAI's ChatGPT.
The study revealed that the chatbots struggled most with topics involving stem cells, athletic performance, and nutrition. Their tendency to project confidence while falling short in accuracy and citation quality underscores the need to reassess AI's role in public health communications.