Yahoo’s Abuse-Detecting Algorithm Works 90% of the Time & It’s A “Major Step Forward” in Its Field



Yahoo’s news articles have plenty of unsavory comments, much like the rest of the internet, so the Yahoo team decided to use their comments section to develop an algorithm that could successfully identify the worst offenders. Their new abuse-detecting algorithm works 90 percent of the time, which they say makes it more effective than other organizations’ attempts at similar feats, and which they describe as a “major step forward” in the field. 90 percent does sound pretty good, I admit.

Wired reports that Yahoo is also “releasing the first publicly available curated database of online hate speech” as part of their project for combating abuse. This means that other sites will be able to use Yahoo’s database of comments to design their own algorithms. Yahoo’s algorithm was developed using machine learning, trained in part on user-reported data from their own comments sections.
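
For a concrete sense of what “machine learning trained on user reports” means in practice, here’s a minimal sketch of a supervised comment classifier: a model learns from comments that humans have already labeled as abusive or acceptable. This is my own illustration using scikit-learn and made-up example comments, not Yahoo’s actual system or data.

```python
# A minimal sketch (not Yahoo's actual system): train a text classifier
# on human-labeled comments, the same general supervised-learning setup
# the article describes. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled data; real training sets are vastly larger and
# come from moderator decisions and user abuse reports.
comments = [
    "You make a fair point, thanks for sharing.",
    "Nobody asked for your garbage opinion, idiot.",
    "Interesting article, I learned something new.",
    "Go crawl back under your rock, loser.",
]
labels = [0, 1, 0, 1]  # 0 = acceptable, 1 = abusive

# TF-IDF turns each comment into word-frequency features;
# logistic regression learns which features predict abuse.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
classifier.fit(comments, labels)

print(classifier.predict(["What a thoughtful take!"]))  # expected: [0]
```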

The trickiest part of any comment-moderating algorithm is dealing with false positives. Many abuse-detecting algorithms look for specific words or phrases, like slurs or common insults, and automatically flag those comments for moderation. However, this results in comments getting flagged even if they reference a slur in the context of saying it’s not appropriate, for example, or if the comment is a sarcastic imitation of a troll. Yahoo’s algorithm can apparently detect certain speech patterns, and it’s designed to tell the difference between jokey sarcasm and actual abuse. (Of course, if your “hilarious” comment is indistinguishable from actual abuse, then I’m pretty sure the algorithm will still flag it, but I can’t say for sure how that part of the AI works.)
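
To see why the naive keyword approach produces those false positives, here’s a toy sketch (my illustration, not Yahoo’s code) of word-list flagging; note how it treats a comment condemning an insult exactly like one hurling it.

```python
# A toy illustration (not Yahoo's code) of naive keyword-based flagging
# and the false positives it produces.
BLOCKLIST = {"idiot", "moron"}  # hypothetical blocked terms

def naive_flag(comment: str) -> bool:
    """Flag any comment containing a blocked word, regardless of context."""
    words = {w.strip(".,!?\"'").lower() for w in comment.split()}
    return not BLOCKLIST.isdisjoint(words)

print(naive_flag("You absolute idiot."))  # True: genuine abuse
print(naive_flag("Calling someone an 'idiot' is not okay."))  # True: a FALSE
# POSITIVE, since the commenter is condemning the insult, not using it.
```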

Yahoo enlisted trained comment moderators to help perfect the algorithm, alongside some paid but untrained moderators, and found that the trained moderators were a whole lot better at figuring out which comments were appropriate (kind of a no-brainer, but hey). Over the course of creating the AI, Yahoo found that the work of these trained human moderators was essential for maintaining the algorithm’s efficacy and perfecting its detection techniques.

Of course, algorithms still have biases, which is why a (hopefully diverse) team of human moderators will need to keep iterating on the methodologies used to classify comments. Sounds like a tough job, but at least Yahoo now understands the importance of valuing the roles of both their well-trained human moderators and their new AI colleague.

(via The Next Web, image via Michael Cordedda/Flickr)

The Mary Sue has a strict comment policy that forbids, but is not limited to, personal insults toward anyone, hate speech, and trolling.



Author
Maddy Myers
Maddy Myers, journalist and arts critic, has written for the Boston Phoenix, Paste Magazine, MIT Technology Review, and tons more. She is a host on a videogame podcast called Isometric (relay.fm/isometric), and she plays the keytar in a band called the Robot Knights (robotknights.com).