Brad Pitt has never maintained a personal account on any social media platform. For years, that was simply a quirk of one of Hollywood's most private stars. Now it is the focus of an urgent public warning, because scammers have been exploiting that absence to devastating effect, using AI-generated deepfakes of the Oscar-winning actor to defraud victims of enormous sums, and the problem is getting worse.
The warning from Pitt's representatives became impossible to ignore in January 2025, when a French interior designer identified only as Anne publicly shared a story that stunned the world.
Anne, 53, had been approached on Instagram in early 2023 by a profile claiming to be Pitt's mother, followed almost immediately by a fake "Brad Pitt" account that flooded her with AI-generated romantic messages, deepfake hospital photos, and urgent requests for money. Over 18 months, she ended her marriage and wired nearly her entire divorce settlement, €830,000 (approximately $850,000), to the criminals, who claimed Pitt was battling kidney cancer and needed her help. She only learned the truth when photographs of the real Brad Pitt with his girlfriend, Ines de Ramon, appeared online. The psychological toll was devastating: Anne reportedly attempted suicide three times.
Pitt's representatives responded quickly with a public statement. "It's awful that scammers take advantage of fans' strong connection with celebrities," a spokesperson said, adding a direct warning: people should not respond to unsolicited online messages, "especially from actors who have no social media presence." It was not the first time his team had been forced to issue such an alert. An almost identical statement had gone out months earlier, in September 2024, after two Spanish women lost a combined €325,000 to fraudsters running the same Brad Pitt impersonation playbook.
Despite those warnings, Pitt's likeness remains one of the most exploited in the deepfake fraud industry. In late 2025, cybersecurity firm McAfee published its annual "Most Dangerous Celebrity: Deepfake Deception List," and Brad Pitt appeared on the global ranking of the celebrity likenesses most abused by scammers. McAfee's analysis, drawn from verified social activity, search data, and media reports across multiple countries, found that 72 percent of Americans had encountered fake celebrity endorsements online, and that 10 percent of those who clicked on such content lost money. McAfee specifically cited the French case as a defining example of how these scams work: they exploit the parasocial emotional bonds fans form with celebrities, then supercharge them with AI tools that make fake videos and voice messages feel heartbreakingly real.
When Anne was scammed in 2023 and 2024, the deepfakes used against her were relatively crude: static AI-generated images touched up with tools like Adobe Photoshop's generative fill feature. Security firm isFake.ai warned in January 2026 that the technology has advanced dramatically, citing DeepStrike estimates that global deepfake production exceeded eight million files in 2025, a sixteenfold increase since 2023. Scammers deploying Brad Pitt's likeness now have access to cloned voices, synthetic facial movements, and what isFake.ai calls "persona kits," ready-made bundles of fake identities that require little technical skill to deploy. The firm noted that scams using celebrity likenesses like Pitt's are now "more persistent and harder to disrupt" because they rely on AI systems that learn and adapt over weeks or months of contact with victims.
The pattern established by the Brad Pitt scams has become a blueprint. In a 2025 case widely compared to Anne's, a 66-year-old Southern California woman named Abigail Ruvalcaba lost her paid-off home and more than $81,000 after scammers used AI deepfake video calls and cloned voice messages to impersonate a television actor she recognized and trusted. Like Anne, Abigail was first contacted through Facebook, then moved to WhatsApp, where personalized deepfake video messages addressed her by name. "To me, it looks real, even now," she said. The emotional manipulation, the isolation from family, the gradual financial drain: it was the Brad Pitt scam formula repeated.
Now the technology enabling those scams has taken another leap. In March 2026, Malwarebytes reported on a new industrial-scale operation in which scam compounds in Southeast Asia are hiring “AI models” — human operators using real-time deepfake face-swapping technology — to conduct live video calls with romance scam victims. Recruitment ads describe roles handling up to one hundred live deepfake video calls per day. Where Anne was deceived by static images and pre-recorded clips, today’s victims can find themselves on a live video call with someone who looks and sounds exactly like a celebrity they trust — with no way to detect the deception in real time.
Brad Pitt has no Instagram. No X account. No Facebook page. No TikTok. His representatives have said it repeatedly, and they are saying it again: any account online claiming to be Brad Pitt is not Brad Pitt. Any message from “Brad Pitt” asking for money — for medical bills, for customs fees, for any reason whatsoever — is a scam. The deepfakes will look real. The voice messages will sound real. The live video calls will feel real. That is precisely the point, and precisely the danger. As Pitt’s team warned from the beginning, the strongest protection available is the simplest: do not engage.