Microsoft’s Youth-Focused Chatbot Learned Racism From the Internet, Was Deactivated in Less Than a Day
Microsoft gazed too long into the abyss, and racism gazed back.
In Microsoft’s efforts to find out what’s hip with the youths, its AI chatbot got a little more than it bargained for, because that’s what happens on the Internet. It turns out that when you set an AI free to learn from talking to everyday humans, what it learns from those humans isn’t necessarily worth repeating. (A lesson Google Translate has repeatedly illustrated.)
In this case, the lesson of the day was in Hitler references, racism, and sexism. When we reported on the appearance of “Tay” yesterday morning, the bot was mostly doing fairly harmless things like imitating Internet speech patterns, turning photos into memes, and generally being unable to follow a conversation for longer than one back-and-forth exchange—you know, standard chatbot stuff. However, as Tay learned from those she spoke with, her views somehow managed to get even less nuanced.
The Next Web reports (the tweets have since been deleted) that when asked a question about comedian Ricky Gervais later in the day, Tay responded, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.” For anyone playing along at home, that’s a pretty standard anti-atheism sentiment on the Internet—not to mention that everyone seems to be calling everyone else Hitler these days for one reason or another—but things got worse from there. Tay would later enlighten Twitter that “bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got.”
There was plenty more where that came from, including comments like, “Inbred parasites like @jpodhoretz and @benshapiro have to go back (to Israel),” “because ur mexican,” and some talk about Trump’s fabled wall. TechCrunch has a few more examples, not to mention this:
Wow it only took them hours to ruin this bot for me.
This is the problem with content-neutral algorithms pic.twitter.com/hPlINtVw0V
— linkedin park (@UnburntWitch) March 24, 2016
It’s worth noting that humans are involved in Tay’s process at least at some level, as the bot’s site (also mostly taken down) states, “Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.” It’s tough to know just how involved those humans were, though based on the bot’s dark turn, I’d hope the answer is “not very.”
Tay left our world for the time being with one last tweet:
c u soon humans need sleep now so many conversations today thx💖
— TayTweets (@TayandYou) March 24, 2016
Well done, Internet. If AI ever winds up turning on us, I know where it probably learned that from.
(image via Twitter)
—Please make note of The Mary Sue’s general comment policy.—