McGill University is outlining some of the ways that AI is being used to spread medical misinformation in Canada.
In a paper originally published by the Canadian university in December but shared in truncated form last week on TVO, science communicator Jonathan Jarry unpacks the growing trend of AI-generated YouTube videos providing deceitful medical advice.
He begins by focusing on a channel called Senior Secrets, which had long claimed to offer professional medical advice on various issues but which, he explains, was fraudulent. While YouTube eventually took down the channel in response to Jarry’s research, it had amassed 300,000 subscribers and 17 million total views in its lifetime.
As Jarry explains, the channel would regularly post videos promising tips on “how to live another 40 years,” a simple exercise to “double leg strength,” and a water supplement to make your skin “be 25 again,” as recommended by a “top dermatologist.” While the thumbnails are what many younger people would immediately recognize as “AI slop,” Jarry still wanted to judge the actual contents of the videos.
As an example, he dove into the channel’s most popular video, viewed 3.5 million times, in which a supposed top heart surgeon suggests skipping walking and doing five alternative exercises instead. Jarry first mentions that the video contains “red flags” like “over-reliance on stock footage,” “simplistic cartoon drawings,” and awkward, robotic narration.
But on a deeper level, he notes that the video is centred around an allegedly “groundbreaking” 2024 study out of Copenhagen that does not in fact exist. Meanwhile, a legitimate article from the Scandinavian Journal of Medicine & Science in Sports is listed in the video description but not actually cited in the video.
Dozens of channels with red flags
Throughout his research, Jarry found dozens of channels with similar deceptive videos from the likes of Senior Book, Senior Wellness, Dr. Reeves, Ageless Vitality, DR. NERITA, and WISE ADVICE. To further illustrate how misleading they can be, he says he also analyzed the top videos from four of these channels and found comparable levels of fake attribution.
Out of 65 references, he noted that only five were real, and even then, just like the Copenhagen study, they were not properly attributed. He added that often, only departments or institutes as a whole were mentioned, like “Mayo Clinic Center for Aging” or “British Columbia University Exercise Science Department,” which is “highly unusual and should serve as a red flag” because individual authors write these papers and would be cited in proper academia.
And when Jarry attempted to reach the channels behind some of the videos, like Senior Secrets, he received responses in Vietnamese. Digging into their geolocation data, he found that many listed their locations as random U.S. places, some of which no longer exist. Given all of this, he concluded that these videos are “almost certainly the work of content farms, likely based in Vietnam.”
These videos are particularly problematic, Jarry notes, because their target audience is seniors. The elderly are not only less savvy about this sort of technology, but may also have diminished vision, hearing and cognitive abilities that make it harder to pick up on red flags like AI-generated imagery or voices. And because these videos pertain to medical issues, they’re much more harmful than, say, the AI movie trailers you see on YouTube.
‘Monetized AI slop’
Later in his piece, Jarry calls for greater regulation of this sort of content. He also criticizes YouTube for making only a “minor” update to its policies last year to go after “mass-produced and repetitious content” like AI-generated videos.
“But AI content, as long as it doesn’t meet this standard of ‘mass production’ and ‘repetitiveness,’ is allowed,” wrote Jarry. “The more people watch it, the more money is made in ad revenue for whoever owns the channel, because ads play before, during, and after many videos. Welcome to monetized AI slop.”
Jarry concludes by advising viewers, especially seniors, to be extra “vigilant” when seeking medical advice online.
“Do not trust random videos for health information. Make sure the host is human and credentialed. Look up their medical license on the website of their medical college to see if they exist. Seek their appearances on legitimate shows that prove they are real. Put more trust in in-person interactions than in what you see online,” he wrote.
“Ask health questions to your doctor, if you have one. Rely on professional orders and associations to find specialists who know the academic literature in their field and can give you evidence-based advice. Develop the healthy reflex, when watching a video from a source unknown to you, to ask yourself, ‘Could this be AI? Is this voice real?’”
Source: McGill University Via: TVO
