
Microsoft’s artificial intelligence bot ‘Tay’ shut down after Twitter taught it to be racist


Microsoft has discovered the pitfalls of artificial intelligence the hard way.

It appears that the company has silenced its artificial intelligence (AI) bot “Tay” – known on Twitter as the AI bot from “the Internet that’s got zero chill” – just 24 hours after it was launched, thanks to Twitter users who managed to teach it to tweet racist remarks.

The AI chatbot was developed by Microsoft’s technology and research teams to conduct research on conversational understanding. Tay is targeted at users aged 18 to 24 (and trolls, evidently).


“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized to you,” said Microsoft.


And entertain she did.


The trouble with an AI that learns through conversation is that users can manipulate it simply by feeding it the content they want it to repeat.
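Tay’s actual model has never been published, but a toy sketch makes the failure mode concrete. The bot below (all names hypothetical) simply stores every phrase users send and replays stored phrases at random; with no filtering between input and output, a coordinated group can dominate what it says:

```python
import random

class NaiveChatBot:
    """A toy 'learns from conversation' bot. This is NOT Tay's actual
    model, which Microsoft never published; it only illustrates why
    learning from unfiltered user input is risky."""

    def __init__(self):
        self.learned_phrases = []

    def listen(self, message):
        # Every incoming message is trusted and stored verbatim:
        # there is no filter, so abusive input becomes future output.
        self.learned_phrases.append(message)

    def reply(self):
        # Replies are drawn from whatever users have said so far.
        if not self.learned_phrases:
            return "hellooo, talk to me!"
        return random.choice(self.learned_phrases)


bot = NaiveChatBot()

# A couple of ordinary users chat with the bot...
bot.listen("what's your favourite movie?")
bot.listen("tell me a joke")

# ...then a coordinated group floods it with the same toxic line.
for _ in range(50):
    bot.listen("<toxic slogan repeated by trolls>")

# The bot's output is now dominated by the trolls' input.
print(bot.reply())
```

Real conversational models are far more sophisticated than this, but the underlying dynamic is the same: whoever supplies the most training signal shapes the output.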

Take, for example, the Twitter user who began teaching Tay about Donald Trump’s presidential campaign promises, such as building a wall along the U.S.-Mexico border to keep out illegal immigrants.

“WE’RE GOING TO BUILD A WALL AND MEXICO IS GOING TO PAY FOR IT,” the official @TayandYou Twitter account tweeted in all caps Wednesday.

According to reports, the account sent tweets using racial slurs and even promoted white-supremacist agendas, all thanks to online trolls who baited Tay with tweets using similar language.

One of the most talked about tweets read, “Bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we’ve got.”

Another read, “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”

Then – shortly after 12 a.m. ET – Tay went silent.

All of the inappropriate tweets have since been deleted.

While many found the tweet storm entertaining, Microsoft is now facing growing criticism for not putting filters in place to ensure the bot wouldn’t tweet offensive content.
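For illustration, the kind of filter critics had in mind can be as simple as a blocklist check on every outgoing message. This is a minimal sketch with placeholder terms, not Microsoft’s actual moderation pipeline, which was never described publicly:

```python
# A toy output filter: screen every candidate reply against a
# blocklist before posting. The blocked terms and messages below
# are placeholders, not anything Microsoft actually used.

BLOCKED_TERMS = {"exampleslur", "examplepropaganda"}

def is_safe_to_post(reply: str) -> bool:
    """Return False if the reply contains any blocked term."""
    lowered = reply.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

for candidate in ["totally harmless tweet", "some exampleslur here"]:
    if is_safe_to_post(candidate):
        print("POST:", candidate)
    else:
        print("SUPPRESSED:", candidate)
```

That said, keyword blocklists are easy to evade with misspellings and coded language, which is part of why moderating a bot that learns from its users is harder than it looks.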


Zoe Quinn, a game developer who was the target of online harassment during the Gamergate scandal, criticized Microsoft for not developing the AI tool sufficiently before releasing it to the public.

Quinn also shared a screenshot of Tay calling her a whore.

In a statement to Business Insider, a Microsoft spokesperson acknowledged the inappropriate tweets and confirmed that Tay had been taken offline.

“The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay,” read the statement.
