AI Scam Alert: Fake Health Tips and Drug Recommendations on the Rise!

June 22, 2025

Certain videos on social media manipulate and spread misinformation about health by promoting habits and products that lack scientific backing.

MEXICO CITY (apro) – On platforms like TikTok and Instagram, a new scam is making the rounds, featuring supposed health experts—created using artificial intelligence—endorsing “miracle” products and practices purported to benefit health, as warned by ESET Latin America, a cybersecurity firm.

Since the advent of accessible generative Artificial Intelligence (AI), new cybersecurity threats have emerged, the company noted. Particularly concerning are the videos, audios, or images made with AI intended to impersonate real people; these are known as “deepfakes.”

“The creation of fake images, videos, or audio for committing fraud, stealing information, or destabilizing public opinion is common today in the world of cybercrime,” said Camilo Gutiérrez Amaya, head of the Research Laboratory at ESET Latin America.

The firm recently pinpointed a trend that manipulates and disseminates health-related misinformation by advocating for habits and products without scientific backing. The ESET research team identified over 20 accounts promoting such content on Instagram and TikTok.

The analyzed videos typically feature a corner-placed “avatar” posing as a specialist with more than a decade of experience, who then offers advice steering viewers toward the products being sold.

The ESET team also noted instances where unapproved drugs are promoted using “deepfakes” featuring the likeness of well-known professionals, such as the Argentine doctor Daniel López Rosetti, whose image was used to endorse a phony medication last March.

 

How to Recognize a Fake Video or “Deepfake”?

One account highlighted by the firm, @wellness_Daniela, has amassed 714 followers on TikTok and posts videos featuring supposed gynecologists, dermatologists, and other specialists generated with AI. In these videos, ESET points to several tell-tale signs viewers should watch for to avoid being deceived.

Here are the indicators of potentially AI-generated content:

  • Misalignment of the mouth and lips, which do not completely match the audio.
  • Unnatural or stiff facial expressions.
  • Visual artifacts and distortions, such as blurry edges or sudden changes in lighting.
  • Robotic, artificial voice or overly consistent tone.
  • Newly created accounts with few followers or no significant history.
  • Exaggerated language, with phrases like “miracle cure,” “doctors don’t want you to know this,” or “100% guaranteed.”
  • Claims unsupported by science or based on studies or sources of low credibility.
  • Pressure to buy and a manufactured sense of urgency, with phrases like “limited time only” or “few units available.”
