INFLUENCERS must make disclosures or disclaimers when discussing subjects such as health benefits, including those related to food items, nutraceuticals, ailment prevention, treatment or medication, medical conditions, recovery methods or immunity boosting. Such disclaimers should be prominently displayed during endorsements and communications involving health-related assertions.
It’s important to note that general fitness and health advice, such as ‘drink water and stay hydrated’, ‘exercise regularly and be physically active’, ‘reduce sitting and screen time’, etc., that is not linked to specific products or services and does not target specific health conditions or outcomes, is exempt from these regulations.
Additionally, celebrities, influencers and virtual influencers presenting themselves as health experts or medical practitioners should clearly differentiate between their personal views and professional advice.
A misleading practice
‘Greenwashing’ is characterised as a deceptive or misleading practice of concealing, omitting or hiding pertinent information through exaggerated, vague, false or unsubstantiated environmental claims. It also covers misleading words, symbols or imagery that emphasise positive environmental aspects while downplaying or concealing harmful attributes.
The draft guidelines for prevention of greenwashing incorporate the following aspects:
These guidelines are applicable to all advertisements, regardless of their form, format or medium. They also extend to service providers, advertisers, endorsers, etc., associated with the advertisement.
Individuals to whom these guidelines apply are prohibited from engaging in greenwashing practices.
Environmental claims, and generic terms such as ‘clean’, ‘green’, ‘eco-friendly’, ‘eco-conscious’, ‘carbon-neutral’, etc., must not be used without sufficient evidence and substantiation.
All environmental claims in advertisements or communications must be fully disclosed. Comparative environmental claims, which compare one product/service to another, must rely on verifiable and relevant data.
Aspirational or futuristic environmental claims should only be made when clear and actionable plans are in place detailing how those objectives will be achieved.
Coaching institute norms
Coaching institutes are often found engaging in deceptive practices in their advertisements, such as using names, photos, testimonials or videos of successful candidates without their express consent, offering 100 per cent job guarantees or presenting false testimonials and reviews.
These practices are considered misleading, and coaching institutes shall be deemed to engage in misleading advertisements if they employ any of the following tactics: withholding crucial information related to the course’s name (whether free or paid), the duration of the course chosen by successful candidates or any other significant details that could impact a consumer’s decision to opt for their services; making false claims about success rates, the number of selections or rankings of students in any competitive exam without providing verifiable evidence; or falsely implying that students’ success is solely attributable to the coaching without acknowledging the individual efforts of the students.
Fair, responsible AI
Consumers must verify all communications from online sources. With the rise in AI-supported cyber frauds that clone voices and faces, and the growing number of reported crimes, vigilance is essential.
A ‘deepfake’ is counterfeit media, digitally manipulated using Artificial Intelligence and deep generative methods, in which one person’s likeness is convincingly replaced with another’s. The technology raises concerns about cyber frauds and unfair trade practices. Victims of cyber frauds can report incidents to the National Cyber-Crime Helpline at 1930 or the National Consumer Helpline at 1915. Additionally, they have the option to pursue mediation or file a case in consumer courts established under the Consumer Protection Act, 2019.
Under the Information Technology Act and its rules, provisions exist to protect individuals’ data privacy. If a deepfake video violates an individual’s privacy by using their likeness without consent, the victim can file a complaint under this Act.
The Information Technology Intermediary Rules mandate that social media intermediaries exercise due diligence regarding privacy policies or user agreements. They are required to inform users not to host content that impersonates another person. Furthermore, upon receiving a complaint about such content, intermediaries should remove or disable it within 24 hours if it involves impersonation on an electronic platform.