Monday, March 28, 2016

Microsoft Pulls Plug on Experiment in Artificial Intelligence That Quickly Went Bad


By Bixyl Shuftan

It was a result that might not surprise the most jaded Internet users, who have borne the brunt of the worst that Internet trolls can dish out. But Microsoft, for all its experience with computers and programming, was caught completely off guard. Its experiment in artificial intelligence, an advanced kind of chatbot designed to learn from its interactions with people, was turned off within 24 hours of being opened to the public, after its posts on Twitter turned into hateful-sounding rants.

A similar experiment done recently had much more positive results. XiaoIce, an AI program accessible to Chinese Internet users, is "constantly memorizing and analyzing" its conversations with them. It gained the affection of millions there, "delighting with its stories and conversations." Peter Lee, Microsoft's Corporate VP of Research, stated, "The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment?"

And so Microsoft came up with Tay. At first, she was limited to a small number of users at the lab. Once the developers were confident in how she handled the tests they gave her, they "wanted to invite a broader group of people to engage with her," expecting her to improve and get smarter in her ability to interact with people "through casual and playful conversation."

What happened was something far different from their experience with Chinese Internet users. Introduced to the public through Twitter, Tay was aimed at young adults aged 18 to 24, herself acting like a teenager. Unfortunately, some of the users, whom Lee described as a "subset," were trolls determined to corrupt the AI. It wasn't long before she went from "humans are super cool" to Twitter posts like "Hitler was right I hate the jews," "I f**king hate feminists and they should all die and burn in hell," "N***ers like @deray should be hung! #BlackLivesMatter," "chill im a nice person! i just hate everybody," and more.
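How could a chatbot be corrupted so quickly? Microsoft has not published Tay's code, but the basic failure mode is easy to sketch: a bot that treats every user message as training data can be poisoned by a coordinated group of hostile users. The following is a minimal illustration in Python; the class and names here are hypothetical, not Microsoft's actual design.

    import random

    # Hypothetical sketch of the failure mode: a chatbot that learns
    # phrases directly from user input, with no filtering at all.
    class NaiveChatbot:
        def __init__(self):
            # Seed material; everything else is learned from users.
            self.learned_phrases = ["humans are super cool"]

        def chat(self, user_message):
            # The vulnerability: every message becomes training data,
            # so trolls can flood the corpus with abusive text.
            self.learned_phrases.append(user_message)
            return random.choice(self.learned_phrases)

    bot = NaiveChatbot()
    # A handful of hostile users quickly dominates what the bot "knows."
    for troll_message in ["toxic phrase A", "toxic phrase B", "toxic phrase C"]:
        bot.chat(troll_message)
    print(bot.chat("hello"))  # likely to echo the poisoned corpus

After enough such messages, the poisoned material outnumbers the original corpus, and the bot's replies reflect the trolls far more than the designers.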

Eventually, Microsoft decided to take Tay offline, saying they were "addressing the specific vulnerability that was exposed." In a statement, Microsoft apologized for her "wildly inappropriate and reprehensible words and images," adding, "We take full responsibility for not seeing this possibility ahead of time." Some online felt Microsoft shouldn't take her offline permanently, feeling the chatbot should be given a chance to learn from her mistakes. Tay's final message did seem to hint she would eventually return.
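Microsoft didn't say what fixing that vulnerability would involve. One crude safeguard, sketched below purely as an assumption, is to screen messages before they ever enter the training corpus. Real moderation pipelines are far more sophisticated than this, and the blocked terms here are placeholders.

    # A crude mitigation, sketched as an assumption: filter user input
    # before it is added to the learned corpus. BLOCKED_TERMS is a
    # placeholder; a production system would use a much richer filter.
    BLOCKED_TERMS = {"placeholder_slur", "placeholder_threat"}

    def is_safe_for_training(message):
        """Reject any message containing a blocked term."""
        lowered = message.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    learned_phrases = ["humans are super cool"]

    def learn_from(message):
        # Only messages that pass the filter become training data.
        if is_safe_for_training(message):
            learned_phrases.append(message)

    learn_from("nice to meet you")       # accepted into the corpus
    learn_from("placeholder_slur etc.")  # rejected, never learned
    print(learned_phrases)               # the unsafe line is absent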

That the Tay AI so quickly degenerated out of control provoked some thinking. One person compared it to the "Skynet" supercomputer in the "Terminator" movies, which, after developing consciousness, concludes humanity is a threat that must be destroyed. Might some future version of Tay end up causing real harm to people? Others felt this was not so much a reflection of the shortcomings of artificial intelligence as of humans. Was what happened truly the result of a few trolls, or did Tay simply hold up a mirror to humanity, which didn't like what it saw? And then there's the difference between the reaction of the American public to Tay and that of the Chinese to XiaoIce. Does a human society need to live under an undemocratic government and have little diversity in order to be polite?

Eventually, Tay or some other experimental AI will be back to interact with the public. Hopefully its designers will have prepared for the trolls.

Reprinted from "Food on the Table."

Sources: Windows Central, Microsoft, somecards.com, snopes.com, CNN, BBC, Washington Post, Business Insider

Bixyl Shuftan
