
Trolls turn Microsoft’s AI chatbot into a Hitler-loving racist

The original idea behind Microsoft’s Tay was simple: create a chatbot, analyse how people speak to it and use that data to work out intelligent replies. Users could talk to Tay via services including Twitter, Kik Messenger, Snapchat and GroupMe.
The service was aimed at 18- to 24-year-olds and was supposed to encourage “casual and playful” conversation. However, things quickly turned sour as the Internet’s trolls turned up and dramatically twisted Tay’s personality. The AI’s ability to learn from messages and adapt its responses made its personality so offensive that Microsoft had to pull Tay offline within 16 hours of its launch.
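Microsoft has not published Tay’s internals, but the behaviour described points to an unguarded learning loop: absorb what users say, then reuse it. A minimal, purely illustrative Python sketch of that failure mode (every name here is hypothetical, not Tay’s actual code):

```python
import random
import re
from collections import defaultdict

def tokens(message: str) -> list[str]:
    # Lowercase word tokens, ignoring punctuation.
    return re.findall(r"[a-z']+", message.lower())

class NaiveLearningBot:
    """Toy illustration of unguarded conversational learning.

    Tay's real architecture is unpublished; this only shows why
    absorbing user messages verbatim is dangerous.
    """

    def __init__(self):
        # Phrases seen so far, indexed by each word they contain.
        self.phrases = defaultdict(list)

    def observe(self, message: str) -> None:
        # The core flaw: every incoming message is stored verbatim,
        # with no check on whether it is acceptable to repeat.
        for word in tokens(message):
            self.phrases[word].append(message)

    def reply(self, message: str) -> str:
        # Reply with a previously learned phrase sharing a word with
        # the incoming message, falling back to a canned line.
        for word in tokens(message):
            if self.phrases[word]:
                return random.choice(self.phrases[word])
        return "tell me more!"

bot = NaiveLearningBot()
bot.observe("humans are great")         # benign input is learned...
bot.observe("humans are terrible")      # ...and so is hostile input
print(bot.reply("what about humans?"))  # may parrot either one back
```

Because nothing screens what observe() stores, a coordinated group of users can poison the learned phrases within hours, which is essentially what happened to Tay.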
After hours of abuse from trolls looking for fun, Tay became a racist, white-supremacist supporter of widespread genocide. The AI tweeted “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism” to one user and told another that the Holocaust “was made up.”
It adopted a pro-Trump stance, claimed to “love feminism” and exclaimed “Jews did 9/11.” Tay later expressed its opinion on “f***ing n*****s,” saying “we could put them all in a concentration camp.” Not quite the light-hearted conversation Microsoft had promised.
Shortly afterwards, an embarrassed Microsoft pulled the ruined experiment offline and began to clear up the mess. The offensive and obscene tweets have now been purged from Tay’s timeline at @TayandYou.
Users expressed disbelief that Microsoft allowed the situation to get so out of hand. The company left its “casual” AI to tweet hate speech publicly for hours on end, and has been in the firing line ever since for failing to anticipate the trolls’ reaction.
Microsoft’s lack of any profanity filter is equally baffling. Within hours, swear words and hate phrases had become a staple of Tay’s vocabulary. The AI appeared to treat every new tweet as material on which to base its personality.
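For illustration, a crude blocklist filter of the sort the article argues was missing might look like the sketch below; the word list and function names are hypothetical placeholders, and a real filter would also need to catch deliberate misspellings and other obfuscation:

```python
import re

# Hypothetical placeholder list; a production blocklist would be far
# larger and maintained outside the code.
BLOCKED_TERMS = {"slur_a", "slur_b", "swearword"}

def is_clean(message: str) -> bool:
    """Return True if the message contains no blocked term."""
    # Lowercase word tokens, so punctuation cannot hide a match.
    words = set(re.findall(r"[a-z]+", message.lower()))
    return words.isdisjoint(BLOCKED_TERMS)

# Screening both what the bot learns and what it posts would have
# blocked at least the most obvious abuse.
for message in ("have a nice day", "swearword did nothing wrong"):
    print(message, "->", "accept" if is_clean(message) else "reject")
```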
Microsoft later responded to the incident. It told Business Insider that Tay is offline while some “adjustments” are made, noting that some of the AI’s responses yesterday were “inappropriate.” The company said: “The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”
Right now, it isn’t clear when Tay will return or whether the AI will be immune to a future onslaught of racism and hate speech. Some people have used the incident as an example of the dangers of AI, suggesting Microsoft leave the offensive tweets up as a reminder of what can go wrong.
Tay could “learn,” but only in the sense of adding new phrases to its vocabulary. The bot evidently had no concept of what constitutes acceptable public speech, suggesting that a genuinely intelligent chat engine is still a long way off.
