
It’s Instagram That’s Enabling Child Exploitation—Not Drag Queens or Trans People


Conservatives across the country are successfully enacting harmful anti-trans and anti-LGBTQ legislation under the guise of protecting children, a bad-faith argument made all the more shameless by the GOP’s refusal to meaningfully engage with actual threats to child welfare. While right-wing figures obsess over drag brunches and who should be allowed to use a f—ing bathroom, children continue to be harmed in ways that are entirely preventable, in forums that are very public—like Instagram, which has allowed child sexual exploitation and pedophile networks to proliferate on its platform. Even worse: Instagram’s prized algorithm is actively promoting predatory accounts.


How Instagram enables child exploitation

The Wall Street Journal published a damning exposé on Instagram and parent company Meta, which has failed to combat what researchers describe as “a vast network of accounts openly devoted to the commission and purchase of underage-sex content.” Working with researchers from Stanford University and the University of Massachusetts Amherst, WSJ found that Instagram’s algorithm helps predators connect with one another by promoting related accounts and hashtags through its recommendation system.

When researchers looked up blatantly explicit hashtags such as #preteensex (a comparatively tame example), they were met with a familiar Instagram pop-up that cautioned, “These results may contain images of child sexual abuse. Child sexual abuse or viewing imagery of children can lead to imprisonment and other severe personal consequences.” Following some boilerplate language about harm and resources, Instagram’s pop-up offered two options: “Get resources” or “See results anyway.” Instagram says it has now removed the “see results” option from the pop-up message, a similar version of which is displayed to warn users of potentially explicit adult content.

Through its investigation, WSJ and researchers found over 400 accounts offering materials depicting the sexual exploitation of children; a little over one-fourth of those accounts had a combined 22,000 followers:

The Stanford Internet Observatory used hashtags associated with underage sex to find 405 sellers of what researchers labeled “self-generated” child-sex material—or accounts purportedly run by children themselves, some saying they were as young as 12. According to data gathered via Maltego, a network mapping software, 112 of those seller accounts collectively had 22,000 unique followers.

In response to the WSJ‘s questions, Meta “acknowledged problems within its enforcement operations and said it has set up an internal task force to address the issues raised.” Meta claims to have eliminated 27 pedophile networks in the past two years and, following the WSJ investigation, says it “blocked thousands of hashtags that sexualize children, some with millions of posts.”

Alex Stamos is the head of the Stanford Internet Observatory and previously served as the chief security officer for Meta. Speaking with WSJ, Stamos pointed out a glaring problem with Instagram’s seeming inability to combat this widespread issue: “That a team of three academics with limited access could find such a huge network should set off alarms at Meta.” Given its technology and access to private user data, the company should have a far easier time than outside researchers identifying and eliminating accounts that trade in images of child exploitation.

The trouble with automating content moderation

Meta’s content moderation practices have come under scrutiny in recent years: In 2019, The Verge published a harrowing report on the lives of content moderators, who were employed not by Facebook, but through a third-party vendor. Underpaid and overworked moderators were exposed to a daily barrage of violent, racist, and horrific content under excessively micromanaged conditions. Many workers reported severe mental health outcomes, including anxiety and symptoms of post-traumatic stress disorder. In 2020, Facebook paid out $52 million to moderators in a class-action settlement.

Rather than invest in the hiring, training, and holistic support of content moderators, Meta outsourced the work to companies in Kenya and began leaning on A.I. As of 2021, Facebook relied on A.I. to detect and remove 95% of the hate speech it took down. It’s unclear how heavily Meta is currently relying on A.I. to moderate content on Instagram, but two things are clear: First, the current system is ineffective at best. A Meta spokesman admitted to the WSJ that the company has failed to act on reported accounts and content. Meanwhile, an internal review revealed that a software glitch was “preventing a substantial portion of user reports from being processed,” and that “moderation staff wasn’t properly enforcing the platform’s rules.”

The other issue is one that plagues even the National Center for Missing & Exploited Children (NCMEC), which works with law enforcement to combat child sexual abuse and exploitation. The non-profit organization maintains a database of “digital fingerprints” (hashes derived from known images and videos) that it shares with internet providers. These fingerprints allow companies to detect child sexual abuse material (CSAM) in media uploaded to their platforms and report it to NCMEC, as required by federal law. NCMEC received 31.9 million reports in 2022; per WSJ, 85% of them came from Meta’s platforms, including about 5 million from Instagram.

But those reports are based on images and videos that are already in the center’s database. Newly created images and videos, on the other hand, have to be detected by individual internet companies and their content moderation processes. When those processes are largely automated and rely heavily on user reporting and algorithms to detect existing media, it’s much easier for abusive materials and predatory networks to proliferate.
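The gap is easy to see in miniature. The following sketch is purely illustrative, not Meta’s or NCMEC’s actual code: it checks an upload’s fingerprint against a set of known hashes, and anything not already in the set passes through. Real systems use perceptual hashes (such as Microsoft’s PhotoDNA) that survive resizing and re-encoding, but the matching logic shares the same blind spot.

```python
import hashlib

# Hypothetical database of "digital fingerprints" of known abusive media.
# Real databases hold millions of entries and use perceptual hashing, which
# matches re-encoded or resized copies; this toy uses exact SHA-256 hashes.
KNOWN_FINGERPRINTS: set[str] = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def fingerprint(media_bytes: bytes) -> str:
    """Reduce an uploaded file to a fixed-length fingerprint."""
    return hashlib.sha256(media_bytes).hexdigest()

def screen_upload(media_bytes: bytes) -> str:
    """Flag uploads whose fingerprint matches the known database."""
    if fingerprint(media_bytes) in KNOWN_FINGERPRINTS:
        return "match: block and report to NCMEC"
    # Newly created material has no fingerprint on file, so it sails through
    # unless a separate classifier or a human moderator catches it.
    return "no match: upload allowed"

print(screen_upload(b""))               # matches: SHA-256 of empty input, seeded above
print(screen_upload(b"novel content"))  # passes: nothing to match against
```

The failure mode the article describes falls out directly: detection is only as good as the database, so first-generation material circulates freely until someone identifies it and adds its fingerprint.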

Hiding in plain sight

A third issue arises from an unexpected source: parents. The Wall Street Journal spoke with Sarah Adams, whose popular Instagram and TikTok videos highlight child exploitation on social media. The mother of two cautions parents about the potential dangers of posting photos and videos of their children online. Adams also has an entire series of videos devoted to the ways in which some parents exploit their own children, primarily young girls, by selling photos, videos, and Cameos of them to online buyers—most of whom are adult males. As Adams explains, these “mommy-run” or “parent-run” accounts offer subscriptions and post inappropriate photos and videos of their young daughters. Some of them offer to fulfill commissions for specific content. Many of the girls are dancers, gymnasts, cheerleaders, and pageant competitors.

Adams often illustrates how easy it is for users to find exploitative content and connect with similar accounts. She offered a troubling example to WSJ:

Given her focus, Adams’ followers sometimes send her disturbing things they’ve encountered on the platform. In February, she said, one messaged her with an account branded with the term “incest toddlers.” 

Adams said she accessed the account—a collection of pro-incest memes with more than 10,000 followers—for only the few seconds that it took to report to Instagram, then tried to forget about it. But over the course of the next few days, she began hearing from horrified parents. When they looked at Adams’ Instagram profile, she said they were being recommended “incest toddlers” as a result of Adams’ contact with the account.

Users don’t even need to search for harmful content to find it. If someone you follow so much as clicks on a distressing hashtag or pauses on a harmful piece of media, however unwittingly, that content can and will be recommended to you.
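In rough terms, that is what an engagement-driven recommender does: it treats any interaction as a positive signal and propagates it through the follow graph. The toy sketch below is invented for illustration (Instagram’s actual ranking system is far more complex and not public), but it captures why a single misclick can ripple outward.

```python
from collections import Counter

# Invented follow graph and engagement log; a pause or a single click is
# logged the same way as deliberate interest.
FOLLOWS = {"you": ["friend_a", "friend_b"]}
ENGAGEMENTS = {
    "friend_a": ["cooking_reels", "harmful_account"],  # one accidental click
    "friend_b": ["cooking_reels"],
}

def recommend(user: str) -> list[str]:
    """Rank content by how many accounts the user follows engaged with it."""
    scores: Counter[str] = Counter()
    for followed in FOLLOWS.get(user, []):
        for item in ENGAGEMENTS.get(followed, []):
            scores[item] += 1  # no notion of intent or context
    return [item for item, _ in scores.most_common()]

print(recommend("you"))  # ['cooking_reels', 'harmful_account']
```

Because the signal carries no notion of intent, the “incest toddlers” account Adams reported could be recommended to her followers on the strength of nothing more than her few seconds of contact with it.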

Because the accounts in question are operated by adults, and the minors depicted are not nude or engaging in explicit acts, Instagram has little—if any—incentive to shut them down. By Meta’s own accounting to WSJ, the company took down 490,000 accounts in January for violating child safety policies. Meta did not clarify how many of those accounts were taken down via automated content moderation, nor did it reveal whether most—or all—of them were identified using NCMEC’s database.

Checking in with the conservative outrage

Conservatives, meanwhile, remain preoccupied with legislating LGBTQ people out of existence—or at least back into the shadows. Some, like Ted Cruz and Marjorie Taylor Greene, tweeted about the WSJ exposé.

Given their party’s animosity toward Facebook and apathetic refusal to pursue reforms that would actually protect children (e.g., gun laws), it’s hard to read these statements as sincere. Browsing the top posts about Instagram on Twitter, you’ll find Trump supporters using the WSJ exposé to justify existing outrage at Meta for tightening its policies regarding hate speech in the wake of the January 6 insurrection. “They Ban people BASED ON POLITICAL BELIEFS and allow these SICKOS free access,” one user tweeted, along with a video in which he bemoans being banned from Facebook and Instagram for “being a patriot” and “just posting facts … about America and Donald Trump and how they stole our election.” There is no discussion of protecting children, only of pedophiles and PizzaGate, and of how the normalization of “deviance” on the left is to blame.

Children are most likely to be sexually abused and exploited by a family member or caregiver. Drag performers do not groom kids. Gay men do not prey on helpless children. Ask anyone who’s ever been a teen girl and most of us will tell you stories about the friend’s dad who tried to grope us on a car ride home; the older manager who pursued and harassed us at work; the youth pastor, the guy we met in a chat room, the “uncle” who was really our dad’s friend. They all got away with it. Many of them still do. Conservative parents follow the example set by their party. When the call is coming from inside the house, you can always just let it go to voicemail.

(featured image: Chesnot, Getty Images)


Author
Britt Hayes
Britt Hayes (she/her) is an editor, writer, and recovering film critic with over a decade of experience. She has written for The A.V. Club, Birth.Movies.Death, and The Austin Chronicle, and is the former associate editor for ScreenCrush. Britt's work has also been published in Fangoria, TV Guide, and SXSWorld Magazine. She loves film, horror, exhaustively analyzing a theme, and casually dissociating. Her brain is a cursed tomb of pop culture knowledge.

