On May 12, 2025, Oscar-winning actress Jamie Lee Curtis took a bold stand against the misuse of artificial intelligence by publicly calling out Meta CEO Mark Zuckerberg on Instagram. Curtis demanded the removal of a deceptive AI-generated advertisement that used her likeness without consent, highlighting the growing concern over deepfake technology and its implications for personal integrity and public trust.
The ad in question repurposed footage from a legitimate interview Curtis conducted with MSNBC’s Stephanie Ruhle, manipulating her voice and image to falsely promote a dental product. Despite efforts through official channels, including cease-and-desist letters and outreach by her legal team, the ad remained active on Meta’s platforms for over a month. Frustrated by the lack of response, Curtis turned to her Instagram account, which boasts over 6 million followers, to directly address Zuckerberg.
In her post, Curtis expressed her dismay: “It’s come to this @zuck. Hi. We have never met. My name is Jamie Lee Curtis, and I have gone through every proper channel to ask you and your team to take down this totally AI fake commercial for some bulls*** that I didn’t authorize, agree to, or endorse.” She emphasized the misuse of her image, stating that it “diminishes my opportunities to actually speak my truth.”
Curtis’s public appeal quickly gained traction, garnering widespread support and media attention. Within hours, Meta responded by removing the offending advertisement. A company spokesperson confirmed that the ad violated Meta’s policies against fraud and deceptive practices.
This incident underscores the challenges posed by AI-generated content, particularly deepfakes, which can be used to spread misinformation and exploit individuals’ likenesses without consent. Curtis’s experience is not isolated; other celebrities, including Tom Hanks and Scarlett Johansson, have also been targeted by unauthorized AI-generated content.
The rapid advancement of AI technology has outpaced the development of regulatory frameworks and content moderation tools, leaving individuals vulnerable to exploitation. Curtis’s case highlights the need for more robust safeguards and accountability measures to protect against the misuse of AI-generated content.
In the wake of the incident, Curtis expressed her hope that her actions would lead to greater awareness and prompt platforms like Meta to implement more effective measures against deepfake content. She stated, “If my brand is authenticity, you’re co-opting my brand for nefarious gains in the future.”
As AI technology continues to evolve, ethical safeguards and protective measures become increasingly critical. Curtis’s proactive stance serves as a reminder of the risks posed by AI misuse and the responsibility of tech companies to protect users against it.
Jamie Lee Curtis’s confrontation with Meta over a deceptive AI-generated ad underscores the urgent need for comprehensive policies and technologies to address the challenges posed by deepfakes. Her actions have sparked a broader conversation about the ethical use of AI and the protection of individual rights in the digital age.