Just a couple of days ago, Microsoft unveiled an AI chatbot called Tay. It was designed to research conversational understanding by actually talking with people on the Internet via platforms like Twitter, Kik, and GroupMe. From the messages people sent it, the bot could learn their interests and respond accordingly.
@costanzaface The more Humans share with me the more I learn #WednesdayWisdom
— TayTweets (@TayandYou) March 24, 2016
Within 16 hours, Tay was taken off the Internet.
Why? Because of its ability to “learn” from conversations and respond in a similar fashion. Within those 16 hours, the online community sent the bot pro-Hitler, pro-Trump, anti-Semitic and anti-feminist messages. What did Tay do? It learnt from them and began to respond likewise.
But was Tay really learning?
I don’t think so. Its actions can be more properly described as “monkey see, monkey do”. Tay’s tweets were just rephrased versions of what the online community was tweeting at it. Yes, even learning how to repeat things takes some learning, but honestly, I think Tay just proved that today’s AI is the same as it has always been: you input information, and you receive an output.
In this case, the input was a string of racist tweets, and the output was a string of racist tweets.
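That input-output loop can be sketched in a few lines. Here is a toy, hypothetical “parrot bot” (not Tay’s actual code, which Microsoft hasn’t published) showing why a bot that “learns” by memorizing its inputs is guaranteed to do garbage in, garbage out:

```python
import random

class ParrotBot:
    """A toy bot that "learns" by memorizing messages verbatim."""

    def __init__(self):
        self.memory = []

    def learn(self, message):
        # "Learning", here, is nothing more than storing the input.
        self.memory.append(message)

    def respond(self):
        # Every reply is drawn straight from what the bot was fed.
        if not self.memory:
            return "..."
        return random.choice(self.memory)

bot = ParrotBot()
bot.learn("a hateful tweet")
bot.learn("another hateful tweet")
print(bot.respond())  # the output can only ever be one of the inputs
```

Feed it garbage and garbage is all it can produce, because there is nothing between the input and the output that could judge what the input means.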
Microsoft took Tay down to give it some upgrades. (They’re probably installing filters to teach it what bad language and racism are.)
I don’t know about you, but to me, deep learning doesn’t seem so powerful anymore. Microsoft is no lightweight in the AI community, and seeing their test go wrong so fast, while funny, did point out some problems with AI.
For one, it appears that if AIs are to learn from anywhere, it’s going to be from humans. Regular humans, like you and me. And humans have a tendency to do stupid things and act like trolls on the Internet. You can’t exactly blame Microsoft for having a racist AI, right? It wasn’t them who input the racism, after all. It was the online community. Microsoft just set up a perfect system of “garbage in, garbage out”.
Also, the Singularity (a hypothetical event when computers become able to improve themselves) will take a lot longer to come around. (If the Singularity is even possible, that is. I personally don’t believe in it.) People have been abuzz recently as AIs have claimed victory after victory: Watson in Jeopardy!, AlphaGo in Go. But the setbacks, too, reveal that the age of Skynet is still over the horizon. Some of the best AI systems still cannot pass a science test written for 14-year-olds.
And of course, there’s Tay itself, the sixteen-hour wonder that it was.
So how smart are computers, actually?
As smart as the user. The phrase “garbage in, garbage out” is still very applicable today. I mean, no matter how great your computer is, you still won’t get the next New York Times bestseller by mashing the keyboard.
Clearly, AI still has a long road ahead of it.