A YouTube Channel Fakes Avi Loeb Using AI
Generative AI has made impersonation alarmingly easy, enabling scammers to clone voices for phishing and spawning deepfakes of public figures. Even Harvard astronomer Avi Loeb, who has drawn wide attention for his controversial idea that interstellar object 3I/ATLAS could be an alien craft, has fallen prey to this trend.
A channel named “Dr. Avi Loeb” imitates the scientist by using AI to replicate his likeness and voice, a sign that public fascination with the topic has made it a lucrative target for scammers.
Loeb confirmed to Futurism that the videos are fake and AI-generated, and he has reported them to YouTube. Unlike Loeb’s own cautious stance, which holds that 3I/ATLAS might be a natural object or might have technological origins, the videos on the channel are highly sensational, with titles such as “3I/ATLAS Is a PROBE — New Data Leaves No Doubt.”
In a note appended to his latest blog post about new Hubble Space Telescope images of 3I/ATLAS, Loeb warned about the broader implications of impersonation. He imagined videos featuring scientists who look and sound like real researchers but disseminate counterfactual information about 3I/ATLAS, and wondered how the public would discern credibility.
“This isn’t science fiction,” he wrote, describing the current reality. “Over the past two weeks, hundreds of fans notified me about a YouTube channel bearing my name, with AI-generated videos about 3I/ATLAS.” The videos also show telltale signs of manipulation, such as jerky movements and a background clock that appears frozen, clues Loeb cited as evidence of synthetic rendering.
Loeb said he holds the creators legally responsible for defamation and for publishing false content. He and his fans have filed numerous reports with YouTube, but, as he remarked, the platform has yet to take decisive action.
The channel likely violates YouTube’s impersonation policy, which forbids content intended to impersonate a person or channel. The policy also warns that violations can result in channel or account termination. Futurism reached out to YouTube for comment but had not received a response at the time of publication.
One question Loeb raised is why someone would invest in such impersonation. He suggested two possibilities: monetization through advertising on a popular YouTube channel (he notes his own substantial readership on Medium.com) and the broader motive of spreading misinformation.
As of now, the impersonation channel—created in September—has attracted over 1.4 million views. If monetized, those views could translate into substantial revenue, potentially tens of thousands of dollars, depending on the payout rate.
Before pivoting to Loeb, the channel had been uploading Tagalog-language content since November, featuring a man in a lab coat identified as “Dr. Ricardo Reyes” who dispensed health advice such as natural diabetes treatments. That history points to a broader pattern of AI-driven impersonation and content farming.
The situation underscores a troubling pattern: platforms have been slow to remove even blatant AI impersonations after they are reported.
Loeb concluded that we are entering a world where fake content can be generated with ease, raising serious questions about how to verify information online. In his view, distinguishing factual physical reality from AI-generated falsehoods is now a persistent challenge for both science and public discourse.