Twitter Caught Up in “Censorship” Debate Again Over Moderating President Obama’s Live Q&A

This article is over 8 years old and may contain outdated information


Twitter’s harassment problem has become an inextricable part of the discussion surrounding the service, and yet today’s news has still managed to stir up a whole new debate about institutional intervention in online harassment. BuzzFeed reports that during President Obama’s live Twitter Q&A in May of 2015, then-CEO of Twitter Dick Costolo ordered employees to deploy an algorithm designed to filter out abusive and harassing replies before they could reach the President.

According to the sources who spoke to BuzzFeed, Twitter used both an algorithm and manual deletion to ensure that the tweets sent to the President would not contain anything objectionable. Sources also stated that Costolo’s decision to remove these tweets was kept secret from certain senior employees at Twitter who would have objected had they known about the practice, given Twitter’s historical devotion to the ideals of “free speech.”

Another source told BuzzFeed that similar precautions had been taken during a Q&A with Caitlyn Jenner. The source explained, “This was another example of trying to woo celebs and show that you can have civilized conversations without the hate even if you’re a high-profile person. But it’s another example of a double standard—we’ll protect our celebrities, while the average user is out there subject to all kinds of horrible things.”

This “source” is not alone in judging Twitter for prioritizing accounts with a “Verified” checkmark over the average user. I’ve seen many folks responding to this story with similar sentiments, arguing that the protections offered to the President at his live Q&A should be available as an option to everyone on Twitter. On the other hand, there’s the argument over whether this “censorship” constitutes inappropriate overreach on Twitter’s part.

It is odd that Costolo felt the need to implement this algorithm in secret, and it’s definitely a sign of the internal struggles Twitter must be facing as it navigates the ongoing bad press about harassment on its platform. BuzzFeed also now has a much longer article up to serve as a companion to this news about Obama’s Q&A; the longer piece details how Twitter has had a harassment problem since the very beginning of the service.

Online abuse is certainly a popular news item to discuss these days, but it’s not a new problem—and even though the problem has been more visible in the news cycle since 2014, surveys show that it’s not actually getting better. It’s still just as bad now as it was two years ago, and that’s in spite of social media platforms attempting to introduce better harassment reporting features, “quality filters” for Verified users, and the like.

When I read this story about President Obama’s live Q&A, I found myself agreeing with Costolo’s decision, although without knowing all of the details, I can’t say for sure whether I would agree with how he implemented it. I can imagine what was going through Costolo’s mind leading up to that decision, however, because I use Twitter, and I know how many racists and bigots use Twitter, and I know how many of them hate the President of the United States. Is it “censorship” for Costolo to use an algorithm to ensure that the President doesn’t have to see the n-word popping up in his mentions column while he’s trying to keep his composure and host a massive public event? Or is that just a sign of Costolo being a polite host? It’s not like President Obama would be expected to answer every single question that appears during a Q&A anyway.

It’s not clear from BuzzFeed’s coverage whether Obama’s team requested that this algorithm be used or whether Costolo implemented it without asking them, but the report makes it sound as though it was all Costolo’s idea. This leads to the larger question of whether Twitter’s “censorship” of hate speech and abuse still counts as “censorship,” a debate that has clearly been going on at Twitter since its early days, with no end in sight.

I think one way to solve the problem is to give users the option to turn these services on or off. As I’ve explained, Verified Twitter users have access to a “quality filter,” which is probably similar to the algorithm used at Obama’s event, although Obama also had the added luxury of Twitter employees manually screening his replies ahead of time and filtering out any bigots who made it past the algorithm. The quality filter for Verified users has an on/off toggle, so it’s optional. It doesn’t catch everything, but it’s something, and it’s an option that I believe should be available to everybody on the service. I also believe Twitter should continue to iterate on the concept of the “quality filter” and let users make suggestions and help test the feature to make sure it works as intended.

This seems like a great compromise for everyone, since it doesn’t involve Twitter deleting anybody’s tweets or necessarily ruining “free speech.” The tweets would all stay up, but they’d be filtered out of sight for people who don’t want to see hate speech or harassing messages, while the people who do want to see these messages can still find them. Twitter gets to have its cake and eat it too. Combined with the tools that already exist on the platform, like muting, blocking, and harassment reporting, the quality filter would be just another available feature that could help the service feel less hostile.

So why is it only available to Verified users? Why does this “quality filter” only come into play when celebrities are involved? Why isn’t Twitter giving users more options to control what they see online?

I think people should have more granular control over what they do and don’t interact with online, not because I’m advocating an “echo chamber” (sigh), but because I always think people should get to opt in to what they want to see as opposed to having it forced on them. I’m all for seeking out alternate perspectives, and I’m fine with reading opinions from people who don’t agree with me, but I don’t benefit from seeing hateful slurs aimed at me. I don’t think anyone does. It doesn’t make me a better writer, and it doesn’t teach me anything new. If anything, it makes it less likely that I’ll be able to hear legitimate criticisms of my work, because I’ll be too busy combing through the illegitimate ones to find the valuable ones.

I hope that Twitter can also manage to take my own critiques to heart, but I imagine those will be difficult to find, because the company will also be inundated with angry replies from bigots who claim to care about “free speech” but actually just want to ensure that their targets see their racial epithets. If only there were something that could be done about that.

(via Engadget, image via Wikimedia Commons)





Author
Maddy Myers
Maddy Myers, journalist and arts critic, has written for the Boston Phoenix, Paste Magazine, MIT Technology Review, and many more. She is a host of the video game podcast Isometric (relay.fm/isometric), and she plays keytar in a band called the Robot Knights (robotknights.com).