AI Assistants Mislead in Almost Half of News Responses: EBU-BBC Study

A study by the European Broadcasting Union and the BBC reveals that leading AI assistants misrepresent news nearly half the time. Evaluating 3,000 responses, the research highlights significant sourcing and accuracy issues in AI-generated answers. The findings raise concerns about public trust and have prompted calls for greater accountability from AI companies.


Devdiscourse News Desk | Updated: 22-10-2025 03:38 IST | Created: 22-10-2025 03:38 IST

Leading AI assistants have been found to misrepresent news content in nearly half of their responses, as outlined in new research by the European Broadcasting Union (EBU) and the BBC. The study, released on Wednesday, evaluated 3,000 responses from popular AI assistants such as ChatGPT, Copilot, Gemini, and Perplexity.

The research found that 45% of the analyzed AI responses contained significant issues, and 81% contained some form of error, notably in sourcing and in distinguishing opinion from fact. The study raises concerns that AI assistants may undermine public trust: sourcing errors appeared in a third of responses, with Google's Gemini particularly affected.

The study, which involved directors from 22 public-service media organizations across 18 countries, calls for greater accountability from AI developers. It emphasizes the need for improved standards of AI accuracy as these technologies increasingly replace traditional news sources for younger audiences.
