India's IT Rules Tackle Deepfakes: Mandatory Labelling for AI Content

India has proposed amendments to its IT rules requiring explicit labelling of AI-generated content to curb the threats of deepfakes and misinformation. The proposed rules hold major social media platforms accountable for verifying synthetic information. Comments on the draft amendment will be accepted until November 6, 2025.


Devdiscourse News Desk | New Delhi | Updated: 22-10-2025 21:42 IST | Created: 22-10-2025 21:42 IST
Country: India

The Indian government has proposed amendments to the IT rules requiring the clear labelling of AI-generated content to counter the threats posed by deepfakes and misinformation. The changes aim to increase the accountability of major platforms such as Facebook and YouTube amid growing concern over the societal impact of synthetic media.

According to the IT Ministry, the prevalence of deepfake audio, video, and other synthetic media highlights the potential for generative AI to be misused to create misleading content. Such media can be weaponized to spread misinformation, damage reputations, and manipulate elections, prompting the government to mandate the clear identification and traceability of synthetically generated content.

The proposed amendments also call for social media platforms to embed metadata in modified content and enforce strict compliance measures to maintain the integrity of labelled information. Failure to adhere to these rules could result in the loss of safe harbour protections, emphasizing the importance of transparency in distinguishing synthetic from authentic media.
