ASCI releases draft rules for labelling AI-generated advertisements

The Advertising Standards Council of India has released draft guidelines for labelling AI-generated ads under a risk-based system aligned with the IT Rules, 2026. The framework classifies content into high, medium, and low risk, aiming to curb deception while avoiding excessive labelling.

Published Date – 12 May 2026, 05:23 PM


New Delhi: The Advertising Standards Council of India (ASCI) on Tuesday released draft guidelines for responsible labelling of synthetically generated Artificial Intelligence (AI) content in advertising, proposing a risk-based framework to ensure transparency and protect consumers.

The guidelines are aligned with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, amended on February 10, and seek to ensure transparency around synthetically generated information while avoiding consumer label fatigue.
The guidelines are open to feedback until June 13.


According to ASCI, AI use in advertising would be considered misleading or harmful only when it creates unfulfillable expectations, exploits vulnerable populations, depicts unsafe situations, or replicates a real person’s likeness without consent.

The requirement to label AI-generated content is based on the risk it poses to consumers. The guidelines classify AI-generated advertising content into three risk categories — high risk (prohibited content), medium risk (labelling required), and low risk (no labelling required). High-risk advertisements are those that are illegal, infringe on rights, make misleading claims, or violate the ASCI Code. These will violate the code even if an AI label is used.

Examples include fabricating endorsements or testimonials, exaggerating product results or features through claims or visuals that create a misleading impression, and using deepfakes, copyrighted work, or a person's likeness without consent. Medium-risk advertisements are those where AI use materially influences consumer decisions and a lack of disclosure would mislead consumers.

Labelling is mandatory in these cases to help consumers understand the nature of the representation. Examples include using virtual or synthetically generated influencers and ambassadors, and replicating a real person's likeness or voice for personalised messaging, even with their consent. Low-risk advertisements feature minor modifications or use AI in ways that have no material impact on a consumer's ability to make an informed choice.

No label is required for:

- Minor enhancements: routine editing, colour correction, noise reduction, standard blemish removal, and minor lighting tweaks that do not alter the substance or core claims of the ad.
- Background and ambient elements: purely decorative AI-generated backgrounds, abstract skylines, ambient music, jingles, etc.


