Twitter Determined to Win Us All Over By Banning 235,000 “Extremist” Users & Giving That “Quality Filter” to Everyone



Pigs have mastered the art of flight, the depths of Hell have cooled down to 32 degrees Fahrenheit, and Twitter has finally addressed some of the basic concerns that their users have been raising for years. First news of the day: the BBC reports that over the past year, Twitter has banned 235,000 users who they deemed to be spouting “extremist” viewpoints that violated their terms of service. Next up: Twitter put out an official blog post today saying that the “quality filter” that had previously only been available to “Verified” users has been extended to everyone.

Go ahead–log into your Twitter account, go to settings, check the notifications tab, and see if you can’t turn on that lovely filter. It should clean your mentions right on up, if it’s as good as I’ve heard from my Verified pals! Unfortunately, it hasn’t been rolled out for me yet, but I expect it’ll be there soon, since some of my not-Verified friends have told me it’s already showing up for them.

According to the BBC’s report, the bans dovetail with Twitter’s expanded focus on analyzing violent threats that appear on its service. In Twitter’s separate blog post about the banning of extremists, they explained, “We have expanded the teams that review reports around the clock, along with their tools and language capabilities. We also collaborate with other social platforms, sharing information and best practices for identifying terrorist content.”

Some say that this change still isn’t enough, however, and that more needs to be done by social media networks to prevent violent groups from organizing online. Nikita Malik, a senior researcher at an anti-extremist group called the Quilliam Foundation, told the BBC that she sees the bans as a “short term solution,” but went on to say that she and her colleagues often work with social media networks “to help them have a more pro-active role” in counteracting the formation and organization of hate groups and terrorist groups on their platforms.

These are complicated problems, which means they require multiple types of interlocking solutions. Twitter banning “extremist” accounts is one step, but it won’t prevent these users from finding other ways to organize. Twitter giving people more ways to control their own feeds is a great step, too, between the new “quality filter” and another announced change: users of the Twitter mobile app will now be able to set their mentions to display only replies from people they follow. (That feature was already available in Twitter’s web client, and it’s a blessing, but it wasn’t available on mobile until now.) However, as excited as I am to try out the “quality filter,” it doesn’t necessarily solve the systemic problems of Twitter that let harassment campaigns spread and grow with terrifying ease.

It’s great to see the service taking steps against extremist groups forming, since that’s a large-scale problem. Twitter, and other social networks, should obviously discourage the formation of hate groups on their services–but they should also consider small-scale problems, like stalkers and predators. Twitter needs to provide better tools for individuals dealing with problems that seem small in comparison to, say, the formation of a terrorist network. A dedicated stalker can seriously impact someone’s life–even if that stalker is not part of any organized group and has few resources beyond an internet connection. A “quality filter” won’t do much in the face of a problem like that.

Twitter came under fire this week for censoring tweets about the Rio Olympics that violated copyright protections while taking no significant steps to decrease widespread harassment–such as, for example, the recent harassment against Olympian Gabby Douglas. This week has seen a spate of articles (most notably, a lengthy Buzzfeed piece about Twitter’s safety practices) that shine a spotlight on this problem and, in all likelihood, these articles played a role in motivating Twitter to take visible action today. It only took … a decade.

My theory? Twitter’s “quality filter” might not actually be ready for widespread use yet, but they probably rolled it out today in the hope of getting out from under the bad press they’ve been receiving all week long. Hopefully, the roll-out will allow them to further improve the tool and iterate upon it according to what users want. Also, I continue to be hopeful that their changes won’t stop here, and that Twitter will continue to listen to their users’ experiences–whether those users are mega-influential celebrities, or total newcomers to the service.

(via BBC, image via Flickr/C_osett)




Maddy Myers
Maddy Myers, journalist and arts critic, has written for the Boston Phoenix, Paste Magazine, MIT Technology Review, and tons more. She is a host on a videogame podcast called Isometric, and she plays the keytar in a band called the Robot Knights.