AI Chatbot Responses: Unmasking the Risks in Health Information

A study reveals that AI chatbots often provide inaccurate medical information, with nearly half of their responses deemed problematic. Researchers stress the need for public education and oversight to prevent misinformation due to chatbot limitations in reasoning and evidence weighing.


Recent research highlights the troubling inaccuracy of AI chatbots in providing medical information, raising concerns about potential public misinformation. The study, published in BMJ Open, found that 49.6% of chatbot responses were problematic, misrepresenting both scientific and non-scientific claims.

Researchers from The Lundquist Institute and UCLA emphasize the rapid adoption of generative AI in health and urge public education and oversight. Of the five widely used chatbots tested, Grok by xAI generated the most highly problematic responses, compared with others studied such as Google's Gemini and OpenAI's ChatGPT.

The study revealed that the chatbots struggled most with questions about stem cells, athletic performance, and nutrition. They projected confidence while falling short on accuracy and citation quality, underscoring the critical need to reassess AI's role in public health communication.
