India is retracting a recent AI advisory following criticism from local and global entrepreneurs and investors.
The Ministry of Electronics and IT shared an updated AI advisory with industry stakeholders on Friday, removing the requirement for government approval before launching or deploying an AI model in the South Asian market.
Instead, firms are now advised to label under-tested and unreliable AI models to inform users of their potential fallibility.
The revision comes after India’s IT ministry faced backlash earlier this month from several high-profile figures, including Martin Casado, a partner at venture firm Andreessen Horowitz, who described India’s move as “a travesty.”
The March 1 advisory also marked a reversal of India’s previously hands-off approach to AI regulation. Less than a year ago, the ministry had declined to regulate AI growth, citing the sector’s importance to India’s strategic interests.
Although the new advisory, like the original, has not been publicly released online, TechCrunch has obtained a copy of it.
The ministry said earlier this month that while the advisory is not legally binding, it signals the “future of regulation” and that compliance is expected of stakeholders.
The advisory stresses that AI models should not be used to disseminate content that is unlawful under Indian law, and should not permit bias, discrimination, or threats to the integrity of the electoral process. Intermediaries are also advised to use “consent popups” or similar mechanisms to explicitly inform users about the unreliability of AI-generated output.
The ministry continues to prioritize the identification of deepfakes and misinformation, recommending that intermediaries label or embed content with unique metadata or identifiers. The advisory no longer requires firms to devise a way to identify the “originator” of a particular message.