New Reddit CEO (and co-founder) Steve Huffman just outlined Reddit’s new set of guidelines for moderating content. In the announcement, he describes Reddit’s attempt to balance free speech with protecting its users.
Huffman begins by breaking down what he believes Reddit’s purpose to be. He said:
Reddit is a place to have open and authentic discussions. The reason we’re careful to restrict speech is because people have more open and authentic discussions when they aren’t worried about the speech police knocking down their door. When our purpose comes into conflict with a policy, we make sure our purpose wins.
He goes on to explain that Reddit is fully conscious of the ways this free speech and open discussion can (and often does) silence others. It’s easy to ban the blatantly awful things (non-consensual pornography, for example); the problematic balancing act comes in trying to define the grey area. So, he outlined the new working policy on what will get a community banned.
These types of content are prohibited:
- Spam
- Anything illegal (i.e. things that are actually illegal, such as copyrighted material. Discussing illegal activities, such as drug use, is not illegal)
- Publication of someone’s private and confidential information
- Anything that incites harm or violence against an individual or group of people
- Anything that harasses, bullies, or abuses an individual or group of people (these behaviors intimidate others into silence)
- Sexually suggestive content featuring minors
There are other types of content that are specifically classified:
- Adult content must be flagged as NSFW (Not Safe For Work). Users must opt into seeing NSFW communities. This includes pornography, which is difficult to define, but you know it when you see it.
- Similar to NSFW, another type of content that is difficult to define, but you know it when you see it, is the content that violates a common sense of decency. This classification will require a login, must be opted into, will not appear in search results or public listings, and will generate no revenue for Reddit.
For a lot of people, this is a move in the right direction for Reddit. First of all, the doubling down on banning harassment is fantastic. Doxxing and encouraging harm to people? Yeah, kick that stuff out of here.
What’s helpful is a clear definition of what flies and what doesn’t. It’s something Reddit has lacked for quite some time now. And to a point, Huffman is right: discussion blossoms when it isn’t stifled by micromanagement and helicopter moderating. Okay, fine.
In the comments, Huffman gives some examples of what’s bannable and what’s not. [Trigger warning for discussion of rape and hate speech.]
He said, “/r/rapingwomen will be banned. They are encouraging people to rape.” But he might be stretching it a bit when he continued, “/r/coontown will be reclassified. The content there is offensive to many, but does not violate our current rules for banning.”
*record scratch*
Oh. Okay. I mean, great! /r/rapingwomen sounds like a terrible, awful place. Good riddance. But the other one? The one that’s full of hate speech, slurs, and the gathered hate for an entire race? That’s fine? What.
It’s important to note that by “reclassified,” he means that subreddits like those will be opt-in only. You’ll have to log in and subscribe to those on your own. They also won’t be listed in search results, nor will they “generate revenue for Reddit.” But they’ll still exist, carrying on in their merry little hateful way.
These sentiments about Reddit’s revenue are echoed in former CEO Ellen Pao’s piece for The Washington Post, where she explained that “to attract more mainstream audiences and bring in the big-budget advertisers, you must hide or remove the ugly.” She also added:
Expecting Internet platforms to eliminate hate and harassment is likely to disappoint. As the number of users climbs, community management becomes ever more difficult. If mistakes are made 0.01 percent of the time, that could mean tens of thousands of mistakes. And for a community looking for clear, evenly applied rules, mistakes are frustrating. They lead to a lack of trust. Turning to automation to enforce standards leads to a lack of human contact and understanding. No one has figured out the best place to draw the line between bad and ugly — or whether that line can support a viable business model.
Huffman shares the same sentiment in his announcement: it is a difficult line to tread. He said:
No company is perfect at addressing these hard issues. We’ve spent the last few days here discussing and agree that an approach like this allows us as a company to repudiate content we don’t want to associate with the business, but gives individuals freedom to consume it if they choose.
Reddit’s new strategy can be summed up as “contain and maintain,” but given the inevitable human error around containing anything (has Jurassic Park taught you nothing?!), it won’t be long before Reddit has too much on its hands. They’ve once again taken on the Sisyphean task of trying to define and moderate the content of “the front page of the internet.”
Good luck with that one.
(via The Verge)
Published: Jul 16, 2015 05:14 pm