
YouTube and Facebook Implement Automated Takedowns for “Extremist” Videos


Several internet video hosting behemoths have begun automating the process of flagging and removing “extremist” videos. Facebook and YouTube are among the websites that have initiated automated takedowns of videos depicting “extremist” political propaganda and/or inciting violence.

According to an in-depth report by Reuters, video-hosting sites like YouTube are under pressure from U.S. and European government leaders to remove “extremist” videos. Because governments are involved and are pressuring corporations to comply, the word “censorship” could conceivably be applied to this scenario. “Censorship” is so overused and misused online that it’s pretty rare to run across a scenario where the word actually fits.

It all started last April, when a nongovernmental organization called the Counter Extremism Project put together a set of recommended guidelines for limiting “online radicalization” via propaganda videos. At that time, U.S. and European governments discussed the content-blocking recommendations set forth by the CEP.

Video-hosting companies have not publicly discussed whether they have implemented these recommendations, but according to Reuters, companies like YouTube and Facebook have been keeping a closer eye on “extremist” content ever since.

The question of whether this qualifies as actual government censorship remains somewhat muddy. According to Reuters, video companies have refused to comment on their methodology or official reasoning for removing the videos, nor have they stated that any government ordered them to comply (getting “pressured” to comply isn’t quite the same as getting “ordered” to comply, I suppose). It’s also not clear how exactly the moderation system works, or to what extent human moderators are involved in finding and removing these types of videos.

Historically, companies like YouTube and Facebook have used automation to flag videos containing violence and pornography, or lesser ills like copyright violations. This isn’t considered “censorship” because, in theory, it’s up to those websites to decide what is and isn’t appropriate to appear on their own platforms. Editorial guidelines aren’t the same as “censorship.”

This situation represents a much more complicated ethical question, however. At first, it seems like an easy question to answer: videos that incite violence and recruit terrorists seem to be unquestionably bad. But in the big picture, the idea of “extremism” can become pretty difficult to define. There are oppressive governments all over the world, and groups of people who fight against them. Are they terrorists, or freedom fighters? It depends who you ask, obviously.

That’s part of why the idea of automating this process seems less ideal than relying on human moderators. Although it’s a very stressful job to moderate videos that could contain violent content, it just doesn’t seem like something that should be left up to robots. It seems like the type of ethical quandary that requires some serious human thinking … at least until we invent a robot that is better at sorting all of this stuff out than we are, at which point we’ll have a whole new set of problems to worry about.

(via The Next Web, image via Niro/Flickr)


Author
Maddy Myers
Maddy Myers, journalist and arts critic, has written for the Boston Phoenix, Paste Magazine, MIT Technology Review, and tons more. She is a host on a videogame podcast called Isometric (relay.fm/isometric), and she plays the keytar in a band called the Robot Knights (robotknights.com).
