Beware of AI-generated doctors giving health advice on TikTok
Seeing is believing? Not anymore with the proliferation of genAI tools that allow anybody to create realistic videos of people and places that don’t even exist.
Multiple reports of AI-generated videos showing doctors dispensing often-questionable health advice began emerging as early as March 2025, when Rolling Stone shed light on such videos being used to sell unproven health supplements. In August 2025, CBS News reported that deepfakes of doctors were appearing on TikTok.
Fast forward to 2026 and not much appears to have changed, as these accounts continue to thrive on social media. In January, investigative media outlet Indicator found dozens of accounts posting videos of synthetic doctors, with the vast majority of identified accounts on Instagram, Facebook, and Threads.
Some of those accounts were found to be selling health products and e-books, suggesting that the actors behind them were motivated by financial gain. The Indicator report also underscored how health misinformation appeared more credible when delivered by a realistic-looking person dressed as a medical professional.
Moreover, this phenomenon is proliferating in multiple languages. Polish fact-checking group Demagog reported that posts with AI-generated doctor personas were used to sell a popular health supplement called shilajit. No credible scientific evidence has shown that shilajit is clinically useful for treating diseases.
More recently, Science Feedback published a report revealing how French-language YouTube channels impersonate medical professionals, sometimes using the names of genuine doctors, further misleading viewers. Some of these channels appeared to target the elderly, a group that some studies have reported as being more vulnerable to online misinformation.
In this report, we take a look at similar accounts on TikTok that post content in English. Unlike the accounts described in earlier reports, several of the accounts we found didn’t overtly engage in commercial activity, such as promoting supplements. Our observations suggest that the motivation behind these accounts could be to build a large following for later monetization.
How did we identify the accounts?
We found accounts posting videos of AI-generated doctors through an initial search on TikTok for the term “doctor advice”, which surfaced relevant posts and the accounts that published them. From there, we used TikTok’s Suggested Accounts feature to find similar accounts.
Using this method, we identified 18 such accounts in just a few hours. Occasionally, we were also able to identify linked social media accounts on other platforms by running reverse image searches of a TikTok account’s profile picture and searching for its username on Google (Figure 1).

Videos of impersonated doctors draw millions of views
Several accounts we found shared the same or similar profile pictures, similar usernames, and similar video designs. These shared characteristics could hint at a common actor behind the scenes, although as Indicator pointed out in its report, bad actors “often copy scripts or rip entire videos from each other, so it’s unclear how many of the accounts might be run by the same person(s)”.

Analyses of the videos using Hive Moderation (a tool to detect AI-generated and deepfake content) generally indicated a high likelihood that AI was used to generate the speech in the videos.



We were able to identify some of the people shown in these videos: they include weight-loss doctor Garth Davis and orthopedic surgeons Paul Zalzal and Brad Weening, who host the podcast “Talking With Docs”. In fact, “Talking With Docs” had already been tipped off to the existence of such accounts and warned its followers on Facebook about deepfake videos of its hosts in January 2026.
These accounts generally didn’t label their content as being AI-generated, even though TikTok’s Community Guidelines require creators to label such content. The Community Guidelines also “prohibit content that can harmfully mislead or impersonate others”, a line that these videos have clearly crossed.
We reached out to TikTok for comment and will update this report if new information becomes available.
Apart from spreading health misinformation, these accounts are also concerning because of their ability to attract a large following. Most had several hundred thousand followers, and the content they produced sometimes drew millions of views. For example, two of the most popular videos posted by the account @healthvibes888 racked up more than 14 million views, while the account @kellycruz_67 received more than seven million views for some of its videos.
Most of these accounts don’t engage in overt commercial activity, like selling supplements, but the large followings that they have attracted mean that they could be eligible for monetization through TikTok’s Creator Rewards Program.
To be eligible for the program, accounts must meet a few requirements. One is that an account must have at least 10,000 followers and 100,000 video views in the last 30 days. Another is that the videos it publishes must be more than one minute long.
Several of the accounts we found had crossed the minimum threshold of followers many times over, meaning that they could be eligible for monetization through this program. The videos that we analysed were also over a minute long.
An investigation by Spanish fact-checking organization Maldita into AI-generated videos of protests in Venezuela and Iran illustrated how producing such videos can be lucrative. Although TikTok limits the program to creators in the United States, United Kingdom, Germany, Japan, South Korea, France, Mexico, and Brazil, the Maldita investigation reported that location information can be spoofed (likely by using a VPN), so the geographical limitation is unlikely to be an obstacle to bad actors.
Finally, the popularity of these videos means that they’re shown to many users, likely drawing more users to follow the accounts. Selling social media accounts with large numbers of followers can be profitable, and these accounts are valuable to bad actors because they can be repurposed to different ends, whether it’s to lure people into scams or to peddle influence online, as Indicator reported in February 2026.
Conclusion
While freely accessible genAI models have been lauded for democratizing access to genAI, it’s become all too clear that society is ill-equipped to reckon with the fallout when such tools are employed by bad actors, whether it’s for fomenting anti-immigrant sentiment or producing child sexual abuse material.
Our findings in this report show that health misinformation is circulating on TikTok under the guise of medical advice from experts who are in fact AI-generated personas. These personas are bound by no code of ethics, owe no duty of care to the people who see them, and can be made to say anything their creators want.
This is deeply problematic: unsuspecting users are more likely to believe misinformation when it’s delivered by someone who appears to be an expert, making it more difficult to debunk. Health misinformation can lead to physical harm, and in the long run it can also damage the credibility of medical professionals as a whole, further eroding public trust in the medical profession.
Finally, the danger of these videos isn’t limited to their potential to cause physical harm. Bad actors may be encouraged to produce more such videos simply as a means to an end: this content attracts views and engagement, making it valuable for farming engagement and expanding audiences. The result is accounts with wide reach that can then be repurposed for other problematic behaviours, including scams and influence operations.
