Swifties Take Action After Nonconsensual Deepfake Porn of Taylor Swift Spreads on X

Taylor Swift is the latest female celebrity to become a victim of nonconsensual deepfake pornography. Fortunately, her fans and other social media users quickly took action to protect the singer by drowning out the explicit photos with a barrage of positive posts.

Deepfake pornography is becoming an urgent issue as artificial intelligence continues advancing, making it increasingly easy for users to convincingly recreate the likeness of others. Unfortunately, there are many ways deepfakes or AI voice generators can be used maliciously—in particular, to create deepfake porn featuring individuals who did not consent to their likeness being used in sexually explicit content. Although anyone can become a victim of nonconsensual deepfake pornography, women and children are especially vulnerable, as the technology is often used to spread revenge porn and depictions of child abuse, and it can also lead to blackmail and harassment.

Although several states have made it illegal to produce nonconsensual deepfake porn, there are no federal laws dealing with the issue. Similarly, although many social media platforms ban such content, they have often failed to enforce these rules. Recently, 17-year-old Xochitl Gomez opened up about how X, formerly Twitter, would not remove explicit deepfakes of her. Other celebrities, such as Scarlett Johansson, have also declared it futile to fight against such images because of the lack of legislation and the difficulty of tracking down and prosecuting internet users. Just days after Gomez discussed the topic, Swift fans were outraged by disgusting, nonconsensual deepfakes of the singer that started spreading on social media.

Taylor Swift fans take action against deepfake porn

On January 25, several explicit deepfakes of Swift began circulating on X, which seems to be the most common medium for this content to spread. As reported by The Verge, one photo in particular gained a sickening amount of traction on the platform. The photo was up for 17 hours before X finally removed it, and in that time racked up 45 million views and 24,000 reposts. Additionally, it was allegedly a “verified” X user who posted it before having their account suspended. Unfortunately, within those 17 hours, countless other accounts screenshotted the image and have continued spreading it. The photo also led to additional nonconsensual deepfakes of Swift being created and posted to the platform.

According to 404 Media, the original photo was traced to a Telegram group dedicated to making abusive, nonconsensual images of women using Microsoft AI tools. The group even reportedly celebrated the Swift photo going viral. This shows that X isn’t the only culprit in the situation, as AI companies and other communication and social media platforms have also failed to prevent their services and tools from being used for malicious purposes. Meanwhile, given that X refused to remove Gomez’s nonconsensual deepfakes, it seems the only reason Swift’s were removed is her enormous fanbase taking action.

Swift’s fans were enraged that the photo was up for 17 hours despite going viral and likely being reported thousands of times. Some lingering images also remain on the platform. As a result, fans started using the hashtags #ProtectTaylorSwift and #TaylorSwiftAI to raise awareness of the issue. Fans, and even many non-fans, began tweeting the hashtags with wholesome photos and captions so that those searching for the explicit images would have a much harder time finding them amid the influx of Swift posts. Many also used the hashtags to speak out against nonconsensual deepfake porn.

https://twitter.com/Bhanu_Sharma__/status/1750628368490615298

One Instagram user also issued a warning over a realistic Swift filter that has been making its way across social media. Its realism varies depending on video quality and who is using the filter. However, while free AI filters on TikTok and Instagram may vaguely resemble real people today, there’s no telling how significantly they’ll advance in the future. Someday, these users may not even need Microsoft AI tools to replicate the likeness of others—it could become as simple as opening an app and clicking on a filter.

It’s important that these stories receive attention, as they may lead parents to rethink posting photos of their children on social media and inspire others to question how they share photos, videos, or audio of themselves on public platforms. These stories also underline how important it is to enact federal laws regulating the use of this technology and for social media platforms to find a more effective way to monitor such content. That groups of men are now congregating on the internet specifically to create nonconsensual and abusive photos of women, and that those photos can spread like wildfire on social media for nearly an entire day without interference, is deeply disturbing—and shows just how behind we are in doing anything about it.

(featured image: Gilbert Flores / Golden Globes 2024 / Getty Images)


Author
Rachel Ulatowski