This shouldn’t happen to anyone, ever. Certainly not over daring to star in the film reboot of a beloved franchise. But yesterday was the culmination of several weeks of racist abuse heaped on Ghostbusters star Leslie Jones for busting ghosts while black. Add that to an already too-long list.
Jones has been dealing with stuff like this for a long time, pretty much since the Ghostbusters reboot was announced. For weeks, she’s been standing up for herself, reporting abusers to Twitter, and blocking people like a boss. But yesterday was the most targeted attack on Jones yet, and it proved to be too much even for her. Much of the targeting was thanks to the attentions of notorious Internet troll Milo Yiannopoulos, who not only flung racist and sexist garbage at her himself, but incited his followers to do the same, keeping it up even after she blocked him.
Supporters (myself included) then took to reporting her abusers to Twitter on her behalf, and as reported by The Verge, Jones herself called upon Twitter to do something about the harassment and come out more strongly against hate speech. When Buzzfeed reached out to Twitter for a statement about what they planned to do, they first offered this bit of helpfulness:
“While we don’t comment on individual accounts, here’s an explainer on our content boundaries here: https://support.twitter.com/articles/18311”
They then released this rather lukewarm statement:
“This type of abusive behavior is not permitted on Twitter, and we’ve taken action on many of the accounts reported to us by both Leslie and others. We rely on people to report this type of behavior to us but we are continuing to invest heavily in improving our tools and enforcement systems to prevent this kind of abuse. We realize we still have a lot of work in front of us before Twitter is where it should be on how we handle these issues.”
The thing is, it’s not just about reporting, it’s about consequences. What actual consequences do violators of Twitter’s abuse policies face? Especially when trolls like this often have multiple accounts? Clearly, whatever consequences are currently in place are… inconsequential. Anne T. Donohue over at 29 Secrets put it this way:
Here’s where you can’t win them all, Twitter. Mainly: if you want people — real people — to feel safe on the Internet, you have to cut off the crew of racist/sexist/homophobic/transphobic/etc. monsters whose lot in life is to bring other people down. It means taking a user cut while you amputate the extension of the app that is poisonous, so the rest of us can survive. It means demanding accountability and it means refusing to tolerate even a second of abusive behaviour.
This is one part of the problem, for sure. Twitter needs to stop being daunted by those who would cry that any sort of banning or suspension amounts to a violation of the First Amendment. You’re a private company, Twitter, not a government service or a public utility. So it’s up to you to decide who gets invited to this party in your home, and what kind of environment you want to create for users. I understand that moderating is a difficult job and there never seem to be enough people to handle the ever-increasing trollitude. Believe me, we here at TMS totally understand.
What I don’t understand is how wildly the results vary when it comes to consequences. Obviously, individual Twitter moderators have different perspectives and tolerance levels for bullshit. However, some things are very clearly sexist and racist. Things that, if there’s a company policy against them, all employees should be on the same page about. Just in having a casual conversation about it with the rest of the TMS staff, we thought of things that Twitter could be doing but isn’t, or at least isn’t doing as effectively as it could.
Twitter could be making better use of IP information, for example. Like putting temporary holds on certain IP addresses to keep them from creating accounts for a set period of time, so that everyone behind that address will just think “Twitter’s down for maintenance,” or something. Or putting temporary bans on brand-new accounts that suspiciously still have the Twitter Egg as their profile picture and are being flagged for sending harassing tweets at a certain frequency from the moment they’re created.
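To make the idea concrete, here’s a minimal sketch of the kind of heuristic described above, flagging brand-new default-avatar accounts whose tweets get reported at a suspicious rate. Every name and threshold here is an illustrative assumption, not Twitter’s actual policy or API:

```python
from datetime import datetime, timedelta

# Hypothetical thresholds -- assumptions for illustration only.
NEW_ACCOUNT_WINDOW = timedelta(days=2)   # how long an account counts as "brand new"
MAX_REPORTS_PER_HOUR = 5                 # reported-tweet rate that trips the flag

def should_temp_ban(account, now):
    """Decide whether a hypothetical account record warrants a temporary ban.

    `account` is assumed to be a dict with:
      'created_at'         -- datetime the account was made
      'has_default_avatar' -- True if it still shows the default "egg"
      'report_times'       -- datetimes when its tweets were reported
    """
    # Only target accounts still using the default avatar...
    if not account["has_default_avatar"]:
        return False
    # ...that were created very recently...
    if now - account["created_at"] > NEW_ACCOUNT_WINDOW:
        return False
    # ...and whose tweets are being reported at a high rate right now.
    recent = [t for t in account["report_times"] if now - t <= timedelta(hours=1)]
    return len(recent) >= MAX_REPORTS_PER_HOUR
```

A real system would obviously need appeals and human review layered on top; the point is just that the signals the paragraph names (account age, default avatar, report frequency) are cheap to combine.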
What’s messed up is that Jones not only dealt with abuse leveled at her own account, but someone was also able to Photoshop an image using her name and profile photo (that people fell for, despite the fact that it didn’t depict @lesdoggg as a verified account) and post horrible things in her name! Seriously, Twitter, what even is the point of a verified account if not to safeguard against potential impersonators? It’s not as if we don’t have face recognition software and the ability to create algorithms to recognize patterns; it seems like Twitter needs more tools to crack down on the various methods used to impersonate accounts. Who is verifying the verified accounts at Twitter, and what is being done to keep these accounts safe?
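On the pattern-recognition point: even without face recognition, one cheap signal for catching impersonators is how closely an unverified account’s handle and display name mimic a verified one. This is a toy sketch under assumed names and thresholds, not a description of anything Twitter actually runs:

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Rough string similarity between two names, 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def looks_like_impersonation(candidate, verified, threshold=0.8):
    """Flag an unverified account whose names closely mimic a verified one.

    `candidate` and `verified` are assumed to be dicts with 'handle' and
    'display_name'; the 0.8 threshold is an arbitrary illustrative choice.
    """
    if candidate.get("verified"):
        return False  # verified accounts aren't impersonating themselves
    score = max(
        name_similarity(candidate["handle"], verified["handle"]),
        name_similarity(candidate["display_name"], verified["display_name"]),
    )
    return score >= threshold
```

Something this crude would over-flag fan accounts on its own, but combined with a copied profile photo and a missing verification badge, it’s exactly the kind of pattern an algorithm could surface for review.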
Verified accounts on Twitter do have access to a “quality filter” that is intended to help cut back on harassment, but as stated it’s only for verified users, and it’s also not clear whether or not it was effective against the onslaught that Leslie Jones received. Last night, Twitter CEO Jack Dorsey invited Leslie Jones to direct-message him, most likely to discuss the “quality filter” and/or other potential options for stemming the tide of abuse. Given that this exchange between them was shortly followed by Leslie Jones dealing with the Photoshop impersonation and then her weary farewell on Twitter late last night, it doesn’t seem likely that her conversation with Dorsey went well.
There’s another problem at work here, too: the seeming attitude that the only kinds of abuse that really matter are the ones that threaten physical violence or encourage someone to harm themselves. When you look at the guidelines above, they seem very focused on things that escalate to violence. Yes, those are abusive, don’t get me wrong, but abuse shouldn’t have to reach the level of physical violence to be considered abuse. There are plenty of abusive relationships, for example, that are verbally and mentally/emotionally abusive. Just as we want to support the victims of those relationships, so too should we be supporting those who suffer consistent and targeted emotional and mental abuse online, because that online abuse has real-world consequences for the person on the receiving end.
Racist, sexist, homophobic, transphobic, and ableist speech — hate speech — is abusive all on its own, without the additional threat of physical violence. No matter how Twitter chooses to punish those who violate its terms, the terms themselves need to include this basic principle and have this way of thinking as their foundation.
And yes, we too have #LoveForLeslieJ. I hope that this experience doesn’t make Jones feel she has to leave Twitter permanently, because she’s one of the bright lights in my Twitter feed. But if she does, I’d understand.
—The Mary Sue has a strict comment policy that forbids, but is not limited to, personal insults toward anyone, hate speech, and trolling.—