I was trying to catch up with projects that were released while I was at GDC, but then Microsoft went and unleashed their ill-considered Tay AI on Twitter, apparently with zero filters and a feature that would repeat whatever other people told it to say.

This was especially egregious because the botmaking community has put a lot of thought into ways to avoid exactly this kind of problem. Darius Kazemi’s Twitter bot etiquette (along with his transphobic joke detection and wordfilter libraries) and Leonard Richardson’s “Bots Should Punch Up” are good starting points.
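The core idea behind these tools is simple: run every candidate post through a blocklist before it goes out, no matter where the text came from. Here is a minimal sketch of that pattern in Python; the term list and function names are illustrative stand-ins, not the actual wordfilter API.

```python
# A minimal sketch of an outgoing-text filter, in the spirit of Darius
# Kazemi's wordfilter. The terms and function names here are illustrative;
# the real library ships with a curated blocklist.

BLOCKLIST = ["example-slur", "another-slur"]  # stand-ins for a real curated list


def is_blocked(text):
    """Return True if the candidate post contains a blocklisted substring."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)


def post_if_safe(candidate, post):
    """Check everything the bot is about to say, including text echoed
    from other users, before handing it to the posting function."""
    if is_blocked(candidate):
        return None  # drop it quietly rather than publish something harmful
    return post(candidate)
```

The important design choice is that the check sits between generation and publication, so it applies equally to text the bot composed itself and text it is merely repeating back, which is exactly the path Tay apparently left unguarded.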

On the Microsoft bot in particular, Caroline Sinders wrote an essay explaining why it is an example of bad design, and thricedotted offers a more insider look at bot algorithms. Alexis Lloyd talks about the politeness of machines, how bots are likely to fail to live up to our social expectations, and asks us to stop trying to make bots act like people.

And as thricedotted touches on, algorithms aren’t neutral: they can easily incorporate biases, either accidentally or as part of the unexamined assumptions of their creators. Many computerized forms that ask for a name can’t handle spaces, apostrophes, or nonstandard capitalization, all of which appear in common English-language names, let alone names from other cultures. A bug in one jury-summons system meant that only part of a list sorted by zip code was ever used, in a county where the higher zip codes had a higher proportion of African-American residents, with the result that those residents were underrepresented in the jury pool.
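The name-validation problem is easy to reproduce. Here is a hypothetical validator of the kind many forms still use; the regex is invented for illustration, not taken from any particular system, but it rejects perfectly ordinary names.

```python
import re

# A hypothetical "valid name" rule: one capital letter followed by lowercase
# letters, nothing else. Invented for illustration, but representative of the
# unexamined assumptions these forms encode.
NAIVE_NAME = re.compile(r"^[A-Z][a-z]+$")

def naive_is_valid(name):
    return bool(NAIVE_NAME.match(name))

# Every one of these perfectly ordinary names is rejected:
for name in ["Mary Ann", "O'Brien", "van der Berg", "DiCaprio", "Nguyễn"]:
    print(name, naive_is_valid(name))  # all print False
```

The zip-code bug is the same class of failure: a detail that looks neutral in the code quietly encodes a demographic bias in the output.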

In making things that make things, we can’t always control every detail of what they produce. But we should take the time to think about potential problems and test for ways our code might hurt someone, so that we can anticipate these issues before we release our creations into the wild.