The Government of India has introduced a new rule for social media companies. The rule states that all AI-generated content must be clearly labeled so people know it was made by artificial intelligence. This covers any photos, videos, or posts made or changed using AI. The label must be permanent and cannot be removed later.
The order came from the government on February 10, 2026. The new rule applies to big social media platforms such as YouTube, Facebook, Instagram, X, and others that have millions of users. They must now detect AI-generated material using tools and technology. If the content is illegal, harmful, or deceptive, the platforms must stop it from being shared.
The government also said that platforms must send warnings to their users about the dangers of AI misuse. These warnings should be sent at least once every three months so users understand what can happen if they forward harmful or misleading AI content.
Key points of the new rule
One of the strictest parts of the new rule is the three-hour takedown deadline. When the government or a court flags AI-generated content as illegal or deceptive, the platform has only three hours to remove it. This is meant to ensure harmful or fake content is taken down fast.
The main objective of the new rule is to make social media a safer and more honest space. AI can generate deepfake videos of public figures and fake news clips that look real. Without labels, people might believe AI content is true and share it without checking. The government wants to stop that.
Earlier, the government had shared draft ideas about labeling AI-made content and asked the public for comments. The draft also talked about how deepfake content and other fake media were a problem online and needed clear rules.
Some social media companies, such as Meta-owned Instagram, have already started adding features that let users label their content as AI-generated themselves. This was likely in response to the draft rules and discussions with the government.
The new rule focuses on transparency. The government wants users to know which content online is real and which is generated by artificial intelligence. It also wants platforms to take responsibility and act quickly when misleading content is uploaded to their services.
In simple terms, the government wants to make sure that AI doesn't trick people. Social media sites now have clear rules to show which content is made by machines, and they must remove anything dangerous very quickly.
Syed Ziyauddin is a media and international relations enthusiast with a strong academic and professional foundation. He holds a Bachelor’s degree in Mass Media from Jamia Millia Islamia and a Master’s in International Relations (West Asia) from the same institution.
He has worked with organizations such as ANN Media, TV9 Bharatvarsh, NDTV, and the Centre for Discourse, Fusion, and Analysis (CDFA). His core interests include tech, auto, and global affairs.
Tweets @ZiyaIbnHameed