
India's Ministry of Electronics and Information Technology (MeitY) has proposed stricter rules requiring continuous, clearly visible labels on AI-generated content for the full duration of its playback. The amendments to the IT rules are intended to enhance transparency and platform accountability, and also mandate faster removal of unlawful synthetic content. While experts acknowledge the potential to build trust, industry stakeholders raise concerns about feasibility, increased compliance costs, and impacts on user experience. Public feedback on the amendments is open until May 7, 2026.
The articles present a range of perspectives: the government's regulatory aim of greater transparency and accountability, expert views supporting stricter standards, and industry concerns about practical challenges and costs. Coverage frames the policy objectives alongside the implementation difficulties in a balanced way, without favoring any political ideology or stakeholder group.
The overall tone is mixed, pairing cautious optimism about improved trust and platform responsibility with critical viewpoints on the feasibility and potential downsides of continuous labelling. The sentiment acknowledges both the regulatory ambition and the practical concerns raised by industry and commentators.
The table below lists each source's own headline, political lean, and sentiment, so framing differences are visible at a glance.
| Source | Their headline | Bias | Sentiment |
|---|---|---|---|
| economictimes | Continuous AI labelling norms to raise compliance bar, costs: Experts | Center | Neutral |
| thetribune | AI video disclosure must not become punishment - The Tribune | Center | Neutral |
The Tribune broke this story on 29 Apr at 9:10 pm; other outlets followed.