The Ministry of Electronics & Information Technology (MeitY) has proposed key amendments to the IT Rules, 2021, that could change how AI-generated content appears online. For the first time, ‘synthetically generated information’ will be formally defined and platforms will be required to clearly label such content.
Under the new proposal, any AI-created asset (be it a brand campaign, influencer post, or viral meme) must carry a visible or audible disclosure. The identifier should cover at least 10% of the screen for images and videos, or play within the first 10% of an audio clip, so that users can immediately distinguish AI-generated material.
The move also coincides with the Advertising Standards Council of India’s (ASCI) announcement to deploy artificial intelligence and machine learning tools to detect misleading, manipulated, or deepfake ads. This dual regulatory push signals India’s intent to bring accountability and transparency to an ecosystem where synthetic content is evolving faster than the rules that govern it.
Manisha Kapoor, CEO and Secretary General, ASCI, shares that with the sheer volume and borderless nature of digital ads, manual monitoring poses significant challenges. To address this, ASCI is working towards building a robust tech ecosystem that not only enables more efficient monitoring but also provides advertisers with tools to proactively assess and correct their advertisements for compliance.
“These models are being trained on Indian-specific advertisements and ASCI’s self-regulatory guidelines. In time, we hope to enable advertisers, especially smaller ones without dedicated compliance teams, to pre-test their creative content before launch and identify potential regulatory issues. Depending on the accuracy and ease of use of such technology, we could decide various ways in which it can be used to streamline and accelerate our work streams,” she says.
For brands, agencies, and creators, these developments could redefine creative workflows and compliance protocols. The proposed labels may challenge how campaigns are produced and presented, while the use of AI for ad monitoring could transform how digital authenticity is policed.
According to Ambika Sharma, Founder and Chief Strategist, Pulp Strategy, execution will determine whether the move strengthens transparency or burdens legitimate creators.
“Most major platforms already carry an AI disclosure button, and responsible brands and agencies like ours are not the source of misinformation or deepfakes. The real accountability gap lies with unregistered creators and unverified public content that circulates without context. Regulating that is far harder than monitoring established advertisers,” she says.
Drawing a parallel with influencer marketing’s early days, Sharma notes that “initially, the ASCI guidelines and the process faced resistance, but over time, verification, platform integration, and industry collaboration improved compliance.” She believes a similar framework could work for AI disclosures. “Registered brands and agencies can transparently declare AI-assisted content through existing metadata or platform tools, possibly via a one-time registration that whitelists their accounts, while enforcement focuses on non-compliant creators, mischief-makers, and misinformation sources,” she adds.
Shweta Purandare, an advertising compliance expert and Founder, Tap-a-Gain.com, welcomes the proposed amendments by MeitY and recognises the growing risk of unlabelled AI-generated content.
However, she points out that the proposal marks a shift in regulatory thinking. “Given that in August 2025, Indian regulators, the Central Consumer Protection Authority (CCPA) and ASCI, had expressed that there is no need for such disclosure, it will be interesting to see how these stakeholders respond to these proposed rules,” she states.
Purandare believes that while disclosure requirements may add another layer to campaign ideation and creation, they are essential for consumer trust. “While such disclosure would be an additional layer in the campaign process, it is a vital step in overall consumer interest,” she notes. “Adapting to such change should not be difficult for advertisers and agencies, as AI is making the entire process itself quite speedy and efficient.”
Citing real-world examples, she adds, “Many brands are currently experimenting with generative AI. For example, Star Health has come up with advertising campaigns with multiple ads using generative AI, but does it have such disclosure in place for a lay consumer to understand that?”
Purandare warns that the absence of disclosure could easily mislead audiences. “Disclosure is a basic minimum guardrail expected from advertisers because the lack of it can mislead consumers into assuming real human endorsement or authenticity,” she says. “It would be difficult to gauge how it would impact the FMCG and beauty/cosmetics sectors in particular, where before-and-after effects could be AI-generated.”
For mFilterIt, the focus is on embedding intelligence directly into the creative process to ensure brands stay compliant in real time.
Amit Relan, CEO and Co-founder of mFilterIt, says, “As MeitY and ASCI move toward stricter oversight, the industry needs more than manual checks; it needs intelligence built into the creative pipeline.”
“At mFilterIt, our AI-ML stack already does this with precision. We use NLP to analyse every word in a creative, computer vision to inspect visual and graphic elements, and frame-by-frame recognition to detect anything that’s manipulated, misleading, or synthetically generated,” he adds.
As generative AI tools also become integral to content creation, the line between creative enhancement and manipulation is increasingly blurred, raising urgent questions about authenticity in influencer marketing. While AI can speed up production, improve visuals, or craft compelling narratives, it also makes it easier to exaggerate claims or fabricate endorsements.
“Authenticity in influencer marketing isn’t fundamentally changed by AI; it’s always involved some level of enhancement through filters and editing,” says Karan Pherwani, Vice President, Chtrbox. “What’s different now is scale and capability.”
Pherwani notes that audiences are becoming more adept at spotting AI-generated visuals. “From a visual perspective, audiences are becoming better at recognising AI-generated content. In India, AI usage in influencer marketing remains limited currently. We have detection tools available to identify AI-created content, though they’ll always be evolving alongside generation technology,” he explains.
Chtrbox’s strategy focuses on three key pillars: “Contractual disclosure clauses requiring creators to declare AI use upfront, detection tools to verify content authenticity, and mandatory disclosure labels on AI-enhanced content to comply with emerging regulations and prevent audience deception,” Pherwani says.
The proposed MeitY amendments and ASCI’s AI-driven monitoring signal a major shift in India’s digital advertising landscape. By mandating clear disclosure and strengthening verification, regulators aim to protect consumer trust while allowing creative innovation to continue.