India has unveiled sweeping new rules requiring social media platforms to remove unlawful content within just three hours of being notified — a significant tightening of the previous 36-hour deadline.
The revised guidelines, which come into force on 20 February, will apply to major platforms including Meta, YouTube and X, as well as to AI-generated material. The government has not publicly explained the decision to shorten the compliance window.
However, critics say the move reflects a broader escalation in oversight of online content and warn it could heighten the risk of censorship in the world’s largest democracy, home to more than a billion internet users.
In recent years, authorities have relied on existing Information Technology rules to direct platforms to remove content deemed unlawful under legislation relating to national security and public order. Experts argue these provisions grant sweeping powers over digital speech.
Transparency reports indicate that more than 28,000 URLs were blocked in 2024 following government directives. The BBC has contacted the Ministry of Electronics and Information Technology for comment on the changes. Meta declined to comment, while X and Google, which owns YouTube, have also been approached.
The amendments also introduce new provisions targeting AI-generated material. For the first time, the law formally defines such content, covering audio and video that has been created or manipulated to appear authentic — including deepfakes. Routine editing, accessibility enhancements and legitimate educational or design work are excluded from the definition.
Platforms that enable users to create or share AI-generated content will now be required to clearly label it. Where technically feasible, they must also embed permanent identifiers to trace its origin. Once applied, these labels cannot be removed.
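The rules do not prescribe a particular labelling mechanism. Purely as an illustration of the tamper-evidence idea, the sketch below shows one way a platform might attach a provenance label to a generated file: a manifest binding a content hash, the generating tool and a timestamp under an HMAC, so that any later edit to the file or the label breaks verification. The function names and key here are invented for this example and are not drawn from the regulations.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key held by the platform; not part of the Indian rules.
PLATFORM_SECRET_KEY = b"replace-with-a-real-secret"

def sign_provenance_label(file_bytes: bytes, generator: str) -> dict:
    """Build a tamper-evident provenance label for AI-generated media.

    Illustrative sketch only: the label ties a content hash, the generating
    tool and a timestamp together with an HMAC, so altering either the file
    or the label causes verification to fail.
    """
    label = {
        "ai_generated": True,
        "generator": generator,
        "sha256": hashlib.sha256(file_bytes).hexdigest(),
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(label, sort_keys=True).encode()
    label["hmac"] = hmac.new(PLATFORM_SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return label

def verify_provenance_label(file_bytes: bytes, label: dict) -> bool:
    """Check that the label matches the file and has not been altered."""
    claimed = dict(label)
    tag = claimed.pop("hmac", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(tag, expected)
        and claimed["sha256"] == hashlib.sha256(file_bytes).hexdigest()
    )
```

A detached manifest like this is detectable but not literally irremovable; approaches such as embedded metadata standards or pixel-level watermarks aim to make labels survive re-encoding, which is the harder problem the rules gesture at.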
In addition, companies must deploy automated systems to detect and prevent illegal AI content, including deceptive or non-consensual material, forged documents, child sexual abuse material, explosives-related content and impersonation.
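The amendments do not specify how such automated screening should work. As a sketch only, one common building block is hash matching against a registry of known illegal material, with anything unmatched routed to slower classifiers or human review; the registry and function names below are invented for illustration.

```python
import hashlib

# Hypothetical registry of SHA-256 digests of known illegal files,
# e.g. supplied by an industry hash-sharing programme; invented here.
KNOWN_ILLEGAL_HASHES = {
    "placeholder-digest-from-a-hash-sharing-registry",
}

def screen_upload(file_bytes: bytes) -> str:
    """Return a moderation decision for an uploaded file.

    Exact hash matching only catches byte-identical copies of known
    material; real systems layer on perceptual hashing and ML classifiers
    to handle re-encoded or novel content, at the cost of false positives.
    """
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in KNOWN_ILLEGAL_HASHES:
        return "block"  # certain match against known material
    # Everything else would go to classifiers and, time permitting,
    # human review - the step critics say a three-hour window squeezes out.
    return "queue_for_review"
```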
Digital rights groups and technology experts have expressed concern about both the practicality and implications of the new framework.
The Internet Freedom Foundation warned that the compressed timeline would effectively turn platforms into “rapid-fire censors”.
“These impossibly short timelines eliminate any meaningful human review, pushing platforms toward automated over-removal,” the group said in a statement.
Anushka Jain, a research associate at the Digital Futures Lab, welcomed the requirement to label AI content, saying it could enhance transparency. However, she cautioned that the three-hour deadline may drive companies toward full automation.
“Platforms are already struggling to meet the 36-hour window because it involves human oversight. If the process becomes fully automated, there is a high risk of excessive censorship,” she told the BBC.
Delhi-based technology analyst Prasanto K Roy described the new system as “perhaps the most extreme takedown regime in any democracy”.
He argued that compliance would be “nearly impossible” without extensive automation and minimal human review, adding that the strict timeframe leaves little opportunity for platforms to evaluate whether removal requests are legally justified.
On AI labelling, Roy said the objective was constructive but warned that dependable, tamper-proof labelling technologies are still evolving.