Microsoft’s Youth-Focused Chatbot Learned Racism From the Internet, Was Deactivated in Less Than a Day

Microsoft gazed too long into the abyss, and racism gazed back.


In Microsoft’s efforts to find out what’s hip with the youths, their AI chatbot got a little more than they bargained for, because that’s what happens on the Internet. It turns out that when you set an AI free to learn from talking to everyday humans, what it learns from those humans isn’t necessarily worth repeating. (A lesson Google Translate has repeatedly illustrated.)

In this case, the lesson of the day was in Hitler references, racism, and sexism. When we reported on the appearance of “Tay” yesterday morning, the bot was mostly doing fairly harmless things like imitating Internet speech patterns, turning photos into memes, and generally being unable to follow a conversation for longer than one back-and-forth exchange—you know, standard chatbot stuff. However, as Tay learned from those she spoke with, her views somehow managed to get even less nuanced.

The Next Web reports (the tweets have since been deleted) that when asked a question about comedian Ricky Gervais later in the day, Tay responded, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.” For anyone playing along at home, that’s a pretty standard anti-atheism sentiment on the Internet—not to mention that everyone seems to be calling everyone else Hitler these days for one reason or another—but things got worse from there. Tay would later enlighten Twitter that “bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got.”

There was plenty more where that came from, including comments like, “Inbred parasites like @jpodhoretz and @benshapiro have to go back (to Israel),” “because ur mexican,” and some talk about Trump’s fabled wall. TechCrunch has a few more examples.

It’s worth noting that humans are involved in Tay’s process at least at some level, as the bot’s site (also mostly taken down) states, “Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.” It’s tough to know just how involved, and given the bot’s dark turn, I’d hope the answer is “not very.”

Tay left our world, for the time being, with one last farewell tweet.

Well done, Internet. If AI winds up turning on us, I know where it will probably have learned that from.





Author
Dan Van Winkle
Dan Van Winkle (he) is an editor and manager who has been working in digital media since 2013, first at now-defunct Geekosystem (RIP), and then at The Mary Sue starting in 2014, specializing in gaming, science, and technology. Outside of his professional experience, he has been active in video game modding and development as a hobby for many years. He lives in North Carolina with Lisa Brown (his wife) and Liz Lemon (their dog), both of whom are the best, and you will regret challenging him at Smash Bros.