India's IT Rules Tackle Deepfakes: Mandatory Labelling for AI Content
India has proposed amendments to its IT rules requiring explicit labelling of AI-generated content to curb deepfakes and misinformation. The proposed rules would hold major social media platforms accountable for verifying and labelling synthetic content. Comments on the draft amendment will be accepted until November 6, 2025.
Country: India
The Indian government has proposed amendments to the IT rules requiring the clear labelling of AI-generated content to counter the threats posed by deepfakes and misinformation. The changes aim to increase the accountability of major platforms such as Facebook and YouTube amid growing concerns over synthetic media's impact on society.
According to the IT Ministry, the prevalence of deepfake audio, videos, and synthetic media highlights the potential misuse of generative AI to create misleading content. Such media can be weaponized to spread misinformation, damage reputations, and manipulate elections, prompting the government to mandate clear identification and traceability of synthetically generated content.
The proposed amendments also call for social media platforms to embed metadata in modified content and to enforce strict compliance measures that preserve the integrity of labels. Failure to adhere to these rules could result in the loss of safe harbour protections, underscoring the importance of transparency in distinguishing synthetic from authentic media.