Categories: Tech and Auto

Three-Hour Deadline For Social Media Firms To Remove Objectionable Content, Govt’s Big Move On Deepfakes, AI Misuse

India has proposed new rules requiring social media platforms to label all AI-generated content, issue regular user warnings, and remove illegal or deceptive AI posts within three hours, aiming to curb deepfakes and make online content safer and more transparent.

Published by Syed Ziyauddin
Last updated: February 10, 2026 18:08:17 IST

The Government of India has proposed a new rule for social media companies. The rule states that all AI-generated content must be clearly labeled so people know it was made by artificial intelligence. This includes any photos, videos, or posts made or altered using AI. The label must be permanent and cannot be removed later.

The order came from the government on February 10, 2026. The new rule applies to big social media platforms such as YouTube, Facebook, Instagram, X and others that have millions of users. They now have to detect AI-generated material using tools and technology. If the content is illegal, harmful, or deceptive, the platforms must stop it from being shared.

The government also said that platforms must send warnings to their users about the dangers of AI misuse. These warnings should be sent at least once every three months so users understand the consequences of forwarding harmful or misleading AI content.

Key points of the new rule

One of the strictest parts of the new rule is the three-hour takedown deadline. When the government or a court flags AI-generated content as illegal or deceptive, the platform has only three hours to take it off the internet. This is meant to ensure harmful or fake content is removed fast.

The main objective of the new rule is to make social media platforms a safer and more honest space. AI can generate deepfake videos of public figures and fake news clips that look real. Without labels, people might think AI content is true and share it without checking. The government wants to stop that.

Earlier, the government had shared draft ideas about labeling AI-made content and asked the public for comments. The draft also talked about how deepfake content and other fake media were a problem online and needed clear rules.

Some social media companies, such as Meta-owned Instagram, have already started adding features that let people label content as AI-generated on their own. This was likely in response to the draft rules and discussions with the government.

The new rule focuses on transparency. The government wants users to know what appears online is real and what is generated by artificial intelligence. It also wants platforms to act responsibly and move quickly when misleading content is uploaded.

In simple terms, the government wants to make sure that AI doesn’t trick people. Social media sites now have clear rules to show which content is made by machines, and they must remove anything dangerous very quickly.


