Government Mandates AI-Generated Content Labelling to Ensure Online Transparency
The Indian government has proposed changes to IT regulations that would mandate clear labelling of AI-generated content. The move seeks to boost transparency and accountability by ensuring creators mark synthetically produced media. Large platforms such as Facebook and YouTube would bear responsibility for flagging synthetic information to curb misinformation.
The Indian government has introduced draft amendments to its Information Technology rules, aiming to ensure transparency in content creation on digital platforms. As highlighted by Electronics and IT Secretary S Krishnan, the emphasis is on labelling AI-generated content, not censorship. The new rules would require content creators to clearly label synthetic media, allowing audiences to make informed judgments about authenticity.
The amendment addresses the rising challenge of deepfakes and misinformation, marking a considerable shift in how digital content is managed. Large platforms, including social media giants like Facebook and YouTube, are expected to play a pivotal role in verifying and flagging synthetic information to minimize harm from misleading content.
By focusing on labelling rather than restricting AI content, India underscores its commitment to fostering innovation while ensuring accountability. The draft amendment, which is open for public comment until November 6, 2025, also calls for embedding metadata and ensuring the visibility of labels, strengthening the legal framework against misleading synthetically generated media.
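The draft amendment does not prescribe a specific metadata scheme, so the following is only a minimal sketch of what embedding and checking such a label could look like in practice, assuming a Python workflow with the Pillow library; the `ai_generated` and `generator` field names are illustrative assumptions, not terms from the draft rules.

```python
# Illustrative sketch only: the draft amendment calls for embedding metadata in
# synthetically generated media but does not prescribe a format. The field names
# used below ("ai_generated", "generator") are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_synthetic(src_path: str, dst_path: str, generator: str) -> None:
    """Embed a simple 'AI-generated' marker in a PNG image's text metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # hypothetical label field
    metadata.add_text("generator", generator)   # e.g. the tool that produced the image
    image.save(dst_path, pnginfo=metadata)


def read_synthetic_label(path: str) -> dict:
    """Read back embedded PNG text chunks, e.g. for a platform-side check."""
    return dict(Image.open(path).text)


# Example usage (paths are placeholders):
# label_as_synthetic("output.png", "output_labelled.png", generator="example-model")
# print(read_synthetic_label("output_labelled.png"))
```

In practice, a visible on-screen label would still be needed alongside any embedded metadata, since the draft also stresses that labels must be clearly visible to audiences.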