India's AI Strategy: Balancing Innovation and Safety

Prime Minister Narendra Modi discusses India's approach to AI, focusing on enhancing safety through regulation, watermarking AI content, and global cooperation. Stressing the need for accountability, he introduces measures such as the IndiaAI Safety Institute and calls for international standards to prevent AI misuse.


Devdiscourse News Desk | Updated: 17-02-2026 19:16 IST | Created: 17-02-2026 19:16 IST

Prime Minister Narendra Modi has called attention to the potential misuse of artificial intelligence, including deepfakes and threats to vulnerable groups, underscoring India's efforts to bolster its regulatory frameworks. These actions include watermarking AI-generated content, strengthening data protection, and establishing the IndiaAI Safety Institute to promote the ethical deployment of AI.

Modi emphasized India's pursuit of a balanced AI approach that fuels innovation while instituting robust safeguards against misuse. He urged global collaboration and responsible governance to achieve safe and equitable "AI for All." In an ANI interview, he remarked, "Technology can augment human abilities, yet ultimate decision-making must rest with humans. Around the world, societies are deliberating AI's application and governance. India is influencing this debate by demonstrating that security and innovation can coexist."

India's commitment to AI safety is not confined to national boundaries. Just as international norms govern aviation and shipping, Modi advocates universal AI standards. He highlighted India's role in the 2023 GPAI declaration and in global AI discussions, pressing for a harmonized approach that advances innovation while enforcing safeguards for AI's responsible use.

Modi called for a global AI compact rooted in foundational principles like effective human oversight, safety-by-design, and transparency. The compact should categorically prohibit AI's use for deepfakes, crime, or terrorism. With the launch of the IndiaAI Safety Institute in January 2025, India has created a platform to encourage the ethical and safe application of AI technologies.

As AI technology progresses, Modi stresses the growing need for accountability. India's focus on local risks encompasses national security concerns and threats to vulnerable groups, including deepfakes targeting women and risks to child safety. In response to rising deepfake incidents, India has introduced rules for watermarking AI content and removing harmful synthetic media. The Digital Personal Data Protection Act further strengthens user rights and data protection in the digital landscape.
