Deepfake images of Taylor Swift, featuring explicit and abusive content, have surfaced on various social media platforms, notably on X.
Swift’s fanbase, known as “Swifties,” has initiated a counteroffensive by using the #ProtectTaylorSwift hashtag on X (formerly Twitter), aiming to drown out the explicit images with positive ones and reporting accounts sharing the deepfakes.
Swift’s fans have a history of rallying to her defense when she is wronged. The deepfake pornography incident echoes past challenges Swift has faced, including her 2017 court battle with a radio DJ.
Reality Defender Tracks Proliferation of Deepfakes
The deepfake-detecting group Reality Defender observed a surge in nonconsensual explicit content featuring Swift, primarily on X. The images also spread to Meta-owned Facebook and other platforms.
Unfortunately, many of the images had already reached millions of users before removal efforts began. Researchers note that explicit deepfakes have risen sharply in recent years as the underlying technology has become more accessible.
Such AI-generated images, which are disproportionately weaponized against women, have also targeted Hollywood actors and K-pop singers. Legal experts suggest that pursuing legal action against the perpetrators could have significant implications.
X and Meta Respond to the Incident
X, in response to inquiries about the fake images, pointed to its safety account and emphasized its strict prohibition on sharing non-consensual nude images. Meta issued a statement condemning the content and pledged to actively monitor for and remove violating material.
AI providers, including OpenAI and Microsoft, expressed their commitment to preventing the misuse of their technology. Microsoft, in particular, acknowledged an ongoing investigation into the potential misuse of its image generator.
Microsoft CEO Satya Nadella’s Call for Guardrails
Microsoft CEO Satya Nadella acknowledged the alarming nature of the explicit AI-generated images of Taylor Swift and called for global, societal norms and guardrails on AI and technology. He emphasized the industry’s responsibility to manage emerging technology so that safer content is produced.
Federal lawmakers, including Rep. Yvette D. Clarke and Rep. Joe Morelle, have introduced bills to address the issue of deepfake porn, emphasizing the need for better protections against non-consensual deepfake content.
The Biden administration deems the circulation of explicit AI-generated images alarming and urges social media companies to play a crucial role in enforcing rules against the spread of misinformation and non-consensual intimate imagery.
In an interview with NBC News’ Lester Holt, Microsoft CEO Satya Nadella emphasized the importance of managing emerging technology, stating:

“I’d say two things. One is, again, I go back to, I think … what is our responsibility? Which is all of the guardrails that we need to place around the technology so that there’s more safe content that’s being produced, and there’s a lot to be done there and not being done there.

“But it is about global, societal … convergence on certain norms … especially when you have law and law enforcement and tech platforms that can come together. I think we can govern a lot more than we think we give ourselves credit for.”