Tay: Crowdsourcing a PR Nightmare
“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.” [1]
In March 2016, Microsoft launched Tay, a Twitter bot designed to learn from its conversations with other users. The experiment quickly unraveled.
The proliferation of chatbots over the past two years has made life easier for some people. But are these bots really learning from their crowdsourced inputs in ways that improve outcomes for all users?