Was Microsoft’s Twitter Bot An Elaborate PR Stunt?

On Wednesday morning, Microsoft launched Tay, an AI Twitter bot intended to learn from and converse with users who interacted with it. Very quickly, it began tweeting comments that ranged from racist to sexist to just generally terrible. Within about 16 hours, Microsoft was forced to shut it down.

Just as quickly as Tay began spouting hate speech, media outlets (even the most respectable) began to bemoan the Microsoft Twitter bot experiment, claiming that it proved just how awful humans — or at least humans on the Internet — can be.

An exchange with Tay from the night of March 23. (Warning: Before Microsoft began deleting Tay’s tweets, many users took and shared screenshots, and most media outlets are advising readers that some of these screenshots may have been manipulated.) Image Source: Gizmodo

And there is absolutely no question that the things Tay said were awful. But don’t worry: none of that necessarily means that humanity is all that awful, because there are several key facts about this story that have gone unreported:

  • As you’d guess, just as quickly as Microsoft began deleting offensive tweets, users began screenshotting them. And as you’d also guess (and as even some of the outlets fanning this story’s flames acknowledge), it’s now tough to tell how many of the Tay tweets you’ve seen were authentic and how many were photoshopped.
  • Tay’s tweets were by no means solely the organic result of things it had “learned” from users. Instead, Tay was equipped with a “repeat after me” function that could simply be made to parrot whatever a user instructed it to say (a rough sketch of how such a function can work follows this list). It’s clear that a large number of Tay’s tweets came about this way. Yes, users still had to say these things in order for Tay to parrot them. But if you give 18-to-24-year-olds a big megaphone with no repercussions, it’s no surprise this happened.
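
For the curious, here is a minimal Python sketch of how a parroting command like this can work. Everything in it, the trigger phrase, the function name, the logic, is an assumption for illustration only; Microsoft never published Tay’s actual implementation.

```python
# Illustrative sketch only: the trigger phrase, function name, and logic are
# assumptions, since Microsoft never published Tay's actual code. It shows
# how a parroting command can bypass a bot's learning pipeline entirely and
# echo user input back verbatim, with nothing screening the output.

TRIGGER = "repeat after me"

def handle_mention(text):
    """Return a reply to an incoming tweet, or None if no reply is due."""
    lowered = text.lower()
    if TRIGGER in lowered:
        # Echo everything after the trigger phrase verbatim: no learning
        # step is involved, and no filter sits between input and output.
        payload = text[lowered.index(TRIGGER) + len(TRIGGER):].strip(" :,.")
        return payload or None
    # Non-trigger tweets would fall through to the normal chat model.
    return None

if __name__ == "__main__":
    print(handle_mention("hey Tay, repeat after me: anything at all"))
    # prints: anything at all
```

The takeaway is that none of Tay’s “learning” needs to be involved here: a single unfiltered echo path like this is enough to account for every parroted tweet.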

After one look at the way Microsoft packaged Tay, it’s not at all hard to see why users would make a joke out of the experiment. Tay’s Twitter bio begins, “The official account of Tay, Microsoft’s A.I. fam from the internet that’s got zero chill!” Elsewhere, Tay’s website outlines the “conversation hacks that can help you and Tay vibe.”

That reads exactly like what it is: a painfully clumsy corporate attempt to connect with a demographic it doesn’t fully understand. You can imagine a guy like former Microsoft CEO Steve Ballmer sitting down and trying to studiously recreate the ways in which he thinks millennials speak.

The result instead reads like a parody of the way millennials speak. So it’s not surprising that Tay’s intended millennial users took the whole thing as a joke. Microsoft set the tone and users ran with it. Did they go too far? Probably. But is the Tay experiment closer to a dark joke than a depressing referendum on human nature? Yes.

And some are even suggesting that Microsoft was, to an extent, in on the joke. Nello Cristianini, a professor of artificial intelligence at Bristol University, theorized in The Independent that Tay may have been a PR stunt:

“You make a product, aimed at talking with just teenagers, and you even tell them that it will learn from them about the world. Have you ever seen what many teenagers teach to parrots? What do you expect? So this was an experiment after all, but about people, or even about the common sense of computer programmers.”


Next, discover why the world’s greatest minds think the artificial intelligence threat is very, very real. Then, see how kids learn English through Twitter.
