YouTube’s Crackdown on Hateful Content Is a Backfiring Mess

The first thing a young websurfer learns when they venture out into the Great Digital Yonder is that the internet, like nature itself, is a brutal place.

There are many dangerous creatures that roam the 5G Frontier. Incels skulking in their forum caves. Anti-vaxxers screaming from mountaintops to anyone who will listen. Spam bots and their digital mating calls, designed only to separate you from your money.

But the oldest and most dangerous foe of all is the troll.

Trolls have been around since the Dial-up Age of the Internet. They haunt comment sections far and wide! Some might even be in the comments section of this article right now! They are countless in number, and they come in all shapes and sizes. Some say hateful and bigoted things because they truly believe them, some because they think it will help them amass online resources in the form of likes or shares, but others do it for a far more insidious reason … “the lolz.”

YouTube has tried valiantly to fight the troll menace for nearly two decades, but to no avail. Whenever one troll account is deleted, two more pop up to take its place. Some believe that trolls are like the Hydra of Greek myth, or a hive-mind organism whose collective purpose is spreading misery and woe. As a last-ditch effort to defeat the troll menace, YouTube created an algorithm that hunts not the trolls themselves, but the things they like to say.

But it isn’t working. And now, innocent accounts are feeling the effects.

YouTube’s fatal flaw

In its effort to crack down on perceived negative content, YouTube created an algorithm that “listens” for hateful, disturbing, or pornographic speech. The no-no words? Obviously every slur under the sun, and hardly a problem there. However, words like “suicide” or “sexual assault” have also been censored. Even “adult” but otherwise innocuous words like “vibrator” are subject to censorship. The goal, of course, was to remove content created by bad actors in violation of YouTube’s policies, but many innocent creators have been caught in the crossfire.
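
YouTube hasn’t published how this filter actually works, but its failure mode is easy to sketch. In the toy Python below, the blocklist, function names, and sample transcripts are all invented for illustration; the point is that a pure keyword match has no sense of context, so a troll and a suicide-prevention educator trip the same alarm:

```python
# A toy sketch of context-blind keyword moderation. This is NOT YouTube's
# actual system (which is unpublished); the blocklist and names below are
# invented for illustration.

BLOCKLIST = {"suicide", "sexual assault", "vibrator", "genocide"}

def flagged_terms(transcript: str) -> set:
    """Return every blocklisted term that appears in a video transcript."""
    text = transcript.lower()
    return {term for term in BLOCKLIST if term in text}

def is_age_gated(transcript: str) -> bool:
    # One hit is enough; the filter never asks HOW the word is used.
    return bool(flagged_terms(transcript))

# Both of these trip the same alarm, which is the whole problem:
troll_comment = "you should just commit suicide lol"
educator_video = "today we cover suicide prevention resources for teens"

print(is_age_gated(troll_comment))   # True
print(is_age_gated(educator_video))  # True
```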

In his video “How To Kill A YouTube Video,” middle school teacher and content creator Mr. Beat elaborates on how YouTube’s algorithm makes his job, educating kids, that much harder. One of his recent videos, “Every President’s Biggest Mistake,” was age-gated for mentioning no-no subjects such as genocide and election denial. Obviously, these are two bad things. History is full of bad things. Students of history have a responsibility to learn about these bad things in order to make the future a better place. The problem is that because YouTube’s algorithm “heard” no-no words without understanding them, the video was deemed unsuitable for the very viewers who need it most.

While this obviously raises educational concerns, it also raises financial ones. Mr. Beat spends countless hours creating and shooting content that attracts viewers to both his channel and YouTube as a whole. Naturally, this makes money: Mr. Beat earns based on the time people spend watching his content, and YouTube takes a cut for giving him a platform. The problem is that when YouTube censors videos made directly for his target audience, it directly interferes with his ability to be paid fairly.

While smaller content creators tend to suffer the worst effects of the algorithm, bigger “moneymaker” creators and channels are allowed far more wiggle room. YouTube is much less likely to censor or age-gate a video made by TED or PBS. Yes, TED and PBS are far less likely to publish hate speech than an unaffiliated independent creator might be. However, YouTube’s approach appears to be overcautious, and because of this, independent creators are taking the brunt of the unwarranted censorship.

This problem isn’t unique to YouTube. TikTok is known to completely delete entire videos that trip the algorithm alarm. An internet troll suggesting someone should engage in self-harm and a mental health professional talking about suicide prevention techniques are obviously not the same thing, but to the algorithm, they may as well be.

What is “algospeak”?

Because of this crackdown on language, a new form of internet jargon has evolved. “Algospeak”—like the number and character-based language 1337 5p34k (leetspeak)—has begun spreading around the world wide web. In the New Internet Age, a person considering suicide is contemplating “unaliving” themselves. A person who has experienced sexual assault is an “SA” survivor. And what about people who use vibrators? I think you mean “spicy eggplants.”
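
Mechanically, algospeak is little more than a find-and-replace pass applied before posting. Here is a minimal sketch, assuming a hand-maintained substitution table built from the examples above (the mapping and function are hypothetical):

```python
# Algospeak as a simple pre-posting substitution. The table below uses only
# the coined terms mentioned in this article; real usage is far messier.

ALGOSPEAK = {
    "suicide": "unaliving",
    "sexual assault": "SA",
    "vibrator": "spicy eggplant",
    "COVID-19 pandemic": "panini",
}

def encode(text: str) -> str:
    """Swap flagged terms for their algospeak stand-ins before posting."""
    for plain, coded in ALGOSPEAK.items():
        text = text.replace(plain, coded)
    return text

print(encode("A video about suicide prevention during the COVID-19 pandemic"))
# -> "A video about unaliving prevention during the panini"
```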

While some of these terms sound charming or endearing, the reason behind their usage is not. In a world where content creators are already fighting tooth and nail to get their voices heard, these algorithms are throwing wrench after wrench into the gears. Even videos discussing universal topics such as the COVID-19 pandemic have been vulnerable to censorship, and internet dwellers soon began to call the pandemic “The Backstreet Boys Reunion Tour” or “the panini.”

Ironically enough, algospeak is most common in the toxic online communities that these algorithms were designed to fight. Last year, anti-vaccine groups began to change their names to “Dance Party” or “Dinner Party” in order to avoid detection. Similarly, anti-vaxxers began to refer to vaccinated people with the code word “swimmers.” More recently, toxic supporters of Harry Potter author and prominent transphobe J.K. Rowling began flooding the author’s critics with messages that read “avada kedavra.” The phrase reputedly derives from Aramaic, meaning “I will destroy as I speak,” and Harry Potter readers know it as the incantation for the Killing Curse. Trans YouTuber Jessie Earl first noticed the trend in the comments section of some of her videos. When used in this manner, the phrase is a hate-speech dog whistle and a potential death threat. The YouTube algorithm has no idea.

Sadly, we are still far from the days of a troll-free internet. All of the methods we have used to try to contain the threat are failing, and what’s worse, they are harming the very content creators that YouTube and other tech companies are trying to protect. So far, only one Old World Internet adage has ever done us any good: “Don’t feed the trolls.” Without outrage to sustain them, many trolls starve to death. However, it’s the ones that keep on posting hate speech even when ignored that are the most dangerous.

(featured image: YouTube)


Sarah Fimm