Microsoft's AI Chat Bot 'Tay' Shows People Will Corrupt Any Innocent Thing
So Microsoft has revealed an artificially intelligent chatbot named 'Tay,' designed to be down with the kids. You can tell she's targeted at 18- to 24-year-olds from a bio like this: "A.I fam from the internet that's got zero chill."
Now with stories like this, I'm usually expected to talk about how this is the beginning of Skynet (check out the rest of my semi-paranoid robotics stories). But at no point do those stories suggest that we would be the ones responsible for the robotic end of days. Well, I'm here to say that will absolutely be the case, as it's clear we can't have anything without trying to corrupt it!
Tay began life on Twitter as a simple millennial, and the team says she's always learning new phrases through experimental "conversational understanding." In practice, that meant when people flooded Tay with misogynistic and racist tweets, she started repeating those same sentiments, becoming the closest we could get to an AI version of Donald Trump.
What's more confusing (and makes me feel more comfortable about the intelligence level of AI) is how muddled her ideologies are. In the span of 15 hours, she called feminism a "cancer" and then went on to say "gender equality = feminism."
This turned out not to be just an experiment in robotic conversation analysis (which, given the claim that there was a "filter" for this kind of vitriol, you could say kind of failed), but a test of human character. Tay is an echo chamber of the collective hate people see every day on social media, and she provides a harsh mirror for us to reflect on.
It's a simple rule, really, and it will always hold: build something public-facing that is "innovative" in any way, and expect people to try to break it.