Darkness Lay Your Eyes Upon Me (by Conspiracy)

64k is not a lot of space for information. Granted, the demoscene has been doing a lot with a little for a long time. But the curious thing about information is that there are mathematical and physical limits to how far it can be compressed.

The demo above, which took 2nd place at Revision 2016, is made up of 65,536 bytes. For comparison, a single uncompressed 1080p frame takes over six megabytes, nearly a hundred times that. Within that limit, the team (who go by the collective name of Conspiracy) managed to convey a cinematic experience with a narrative bent by using just a few thousand numbers.

Narrative and cinematography are themselves a kind of information compression, a way of packing more meaning into symbols and images than their pixel values would suggest. The formal study of art and art history is largely a process of learning to decode the conversations that artists were carrying on through their work, and of understanding the abstract systems that drove those conversations in turn.

Both narrative and cinematography are forms of order. Technological sophistication on its own is seldom enough to create the kinds of order that hold our attention for long, and the best tech solutions often operate by emergently invoking that narrative sense. That’s one reason why I think it’s a mistake to view story or visuals as something separate from gameplay or systems: they’re all part of the same thing, multiplied together to create the meaning from the machine.

Generating something that has a story attached, or that conveys meaning through composition or montage, can often make for a better generator without needing more sophisticated technology. While I enthusiastically embrace the creation of new algorithms, I am even more interested in the designers who take existing technology and repurpose it to create new meanings and new stories.

(via https://www.youtube.com/watch?v=x_izwOdGFlk)




Automated Fairy Tales

Would you like to read a fairytale?

What about a blog of automatically generated fairytales, based on Vladimir Propp’s classifications of folktale morphology?

Michael Paulukonis took his 2014 NaNoGenMo project, the Poorly Applied and Misunderstood Proppian Narratological Generator, and hooked it up to a Tumblr blog.

The result recognizably follows the form of the folktales—and some of the turns of language—though reading them in bulk quickly reveals the seams.

The GitHub repository for the original project is of interest to anyone wanting to make a fairytale generator, because Michael tends to thoroughly document his influences and prior sources, and this project is no exception. There are links to code resources, indexes of motifs, online fairytales, research about semantic networks, previous fairytale generators, and further research.
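Propp’s observation was that folktale functions occur in a fixed order, which makes the skeleton of a generator surprisingly small. Here’s a minimal sketch of that idea in Python; it’s my own illustration, not Michael’s code, and the cast and canned sentences are invented:

```python
import random

# A tiny subset of Propp's thirty-one narrative functions, in canonical order.
FUNCTIONS = [
    ("absentation", ["{hero}'s {relative} left home."]),
    ("interdiction", ["{hero} was warned never to enter the {place}."]),
    ("violation", ["Of course, {hero} entered the {place} anyway."]),
    ("villainy", ["There, {villain} stole the {object}."]),
    ("return", ["{hero} recovered the {object} and returned home."]),
]

# An invented cast; a real generator would draw on motif indexes instead.
CAST = {"hero": "Vasilisa", "relative": "father", "place": "dark forest",
        "villain": "Koschei", "object": "firebird's feather"}

def tale():
    # Functions always appear in order, but not every tale uses all of them.
    chosen = [f for f in FUNCTIONS if f[0] == "villainy" or random.random() < 0.8]
    return " ".join(random.choice(lines).format(**CAST) for _, lines in chosen)

print(tale())
```

Swap the canned strings for larger grammars and you start to get the texture of the Tumblr’s output, seams and all.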

Michael remains interested in fairytales and generation, as his 2015 project The Programmer Who Had No Heart in His Body demonstrates.

http://fairytalesbot.tumblr.com/



I wasn’t planning on saying anything else about Tay, but some of the aftermath bears mentioning.

First, Microsoft Research has released a statement. It includes the rather interesting detail that they have an earlier chatbot, XiaoIce, running in China. They’re light on the implementation details, so we still don’t know how Tay works under the hood, but they blame the problems on a coordinated attack that exploited a vulnerability in Tay. It’s not clear if they just mean the repeat function–which was obviously abusable–or some other functionality.

Unlike a lot of other bots, Tay made the national news. The Washington Post’s overview is a fairly good high-level look at the recent events. I note that at one point they describe the bot as being confused because it responds to similar input with completely opposite sentiments. This is tricky, because anthropomorphizing a bot can obscure how it actually functions. We don’t know if the bot has any understanding of the concepts its words signify, and we are almost certainly seeing more patterns in the output than actually exist.

Indeed, several people have suggested that one of the problems is that Tay was being presented as an anthropomorphic entity at all. After all, even adult humans can have trouble navigating online interactions, particularly when intentional abuse is involved. A conversational interface creates expectations that the computer can’t always live up to. As linked last time, Alexis Lloyd believes that conversational interfaces are a transitional phase.

Russell Thomas, a computational social scientist, has his own estimation of what went wrong. Historically, AIML chatbot code has included a repeat feature, and his contention is that Tay didn’t include much of what he considers AI: he suggests it was mostly using search-engine-style algorithms, rather than any kind of concept modeling or natural language processing.
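To make that repeat feature concrete, here’s a sketch in Python of the kind of echo behavior being described. This is not Tay’s actual code, which Microsoft hasn’t published; it just shows why verbatim echoing is trivially abusable:

```python
def respond(message):
    # Echo anything after the trigger phrase, like AIML's classic repeat category.
    prefix = "repeat after me "
    if message.lower().startswith(prefix):
        return message[len(prefix):]  # echoed verbatim, whatever it says
    return "I don't understand."      # stand-in for a retrieval-based fallback

print(respond("repeat after me anything abusive at all"))
```

Anything a user types after the trigger comes straight back out of the bot’s mouth, under the bot’s name.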

Search engines can be sophisticated algorithms, but they don’t encode the contextual understanding we’d expect in a conversation. Worse, the marketing for the bot didn’t line up with either its capabilities or its target audience.

As Allison Parrish reminds us, A.I. is still “a computer program someone wrote”. While it’s fun to dream about the future possibilities of an artificial intelligence that treats us like people, the existing state of the art has the same relationship with us as other forms of software. 

How much blame should Microsoft Research shoulder for this? Is it possible to make a bot that’s completely foolproof? Tay’s algorithm was vulnerable, but as Alex J. Champandard points out, the actual abuse came from a coordinated attack intended to exploit the bot. Will expectations of foolproof bots hurt AI research? In response, Joanna Bryson posits that “Bot authoring is not tool building” and draws a distinction between moral agency and general agency (and Alex responds in the comments).

In general, I agree that it’s laudable that Microsoft Research has stepped up to take responsibility for failing to anticipate this outcome. After all, it’s not the first time this has happened: Anthony Garvan shared a story about the time one of his bots turned racist.

The botmaking community had intense discussions about the implications of the news. A number of botmakers were interviewed about “How Not To Make A Racist Bot”, including thricedotted, Rob Dubbin, and Parker Higgins. They talk about what could have been done differently, the problems they’ve had with their own bots, and some suggestions for making more ethical bots. It’s probably the most comprehensive response, coming from people who have made bots themselves and dealt with some of these issues before.




Moon Hunters and Rhizome Stories

I’ve been following Moon Hunters’ development since I backed the Kickstarter: a co-op game that uses procedural generation is obviously right up my alley.

One aspect that stands out is the way the game’s story can’t be completed in a single playthrough. Each individual session is a short action-RPG story about a soon-to-be-mythological character, but the player can only experience a small subset of the possible encounters. Some don’t even unlock until later playthroughs, and you’ll often find hints for events you might encounter in future stories.

The way that the events and stories connect across multiple playthroughs means that the overall narrative of the game is a complex, interwoven structure, of the kind that Janet Murray called “rhizome fiction”. A rhizome is at once a complex root system, a philosophical concept, and a story structure without a definite beginning or ending, as pioneered in the postmodern hypertext community.

Moon Hunters is far from the only game that uses a rhizomic structure. Many games use hypertextual principles: in my opinion, Jesper Juul’s closed progression systems are best understood as a subset of hypertext. While many games use maze structures with a definite flow for their overall narrative, the more amorphous rhizome structure also sees use: open-world games with a ton of sidequests, for example. Or Her Story, which just picked up a bunch of IGF awards by cleverly adapting full-motion video to a rhizomatic narrative. Or Sam Barlow’s Aisle, an interactive fiction game where you only make one move. (Try it out; it’s a pretty clear example of what rhizome fiction can be like.)

What makes Moon Hunters interesting as an action-RPG is that while each individual play session follows Murray’s labyrinth pattern with strongly defined milestones, the constellation system creates a rhizomic structure that serves as an overarching narrative. Since the entire game is framed as a ritualistic mythological enactment, any inconsistencies are subsumed into the myth-making.

Rhizome fiction is particularly useful for procedural generation because it relies on the player to draw the connections between ideas. In the same way that Nick Montfort’s 1K story generators use elision to imply a story, the individual episodes in Moon Hunters don’t need to fit within a single plot. This saves the trouble of writing a plot generator, which is not easy. But it also means that the structure actually used to generate the stories is free to be a non-narrative system, which means it can be more closely integrated with the action gameplay, or structured to follow the pacing of a play session.



I was trying to catch up with projects that were released while I was at GDC, but then Microsoft went and unleashed their ill-considered Tay AI on Twitter, apparently with zero filters and a function to repeat whatever other people told it to say.

This was especially egregious because the botmaking community has put a lot of thought into ways to avoid similar problems. Darius Kazemi’s Twitter bot etiquette (and transphobic joke detection and wordfilter) and Leonard Richardson’s “Bots Should Punch Up” are good starting points.
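The core of the wordfilter approach is simple enough to sketch in a few lines of Python. This is my own stand-in, not Kazemi’s library, and the placeholder list stands in for his curated blacklist:

```python
BLACKLIST = ["badword1", "badword2"]  # placeholders for a curated slur list

def safe_to_post(text):
    # Refuse anything containing a blacklisted substring, case-insensitively.
    lowered = text.lower()
    return not any(bad in lowered for bad in BLACKLIST)

def post(text):
    if safe_to_post(text):
        print("POST:", text)   # stand-in for the actual Twitter API call
    else:
        print("BLOCKED")       # silently dropping the output also works

post("totally innocuous generated sentence")
```

A substring blacklist is blunt and has false positives, but for an unattended bot a false positive is much cheaper than a slur.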

On the Microsoft bot in particular, caroline sinders wrote an essay explaining why it is an example of bad design, and thricedotted has a more insider’s look at bot algorithms. Alexis Lloyd talks about the politeness of machines and how bots are likely to fail to live up to our social expectations, and asks us to stop trying to make bots act like people.

And as thricedotted touches on, algorithms aren’t neutral: they can easily incorporate biases, either accidentally or as part of the unexamined assumptions of their creators. Many computer forms that ask for a name are unable to deal with spaces, apostrophes, or capitalization, all of which feature in common English-language names, let alone names from other cultures. A bug in a jury-summons system resulted in only part of a list sorted by zip code being used, in a county where the higher zip codes had a higher proportion of African-American residents.
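That jury-summons bug is easy to reproduce in miniature. The zip codes, roster size, and cutoff below are invented for illustration:

```python
import random

# A toy county roster: 100 residents in each of ten zip codes.
roster = [(f"resident_{z}_{i}", z)
          for z in range(10000, 10010)
          for i in range(100)]
roster.sort(key=lambda person: person[1])  # sorted by zip code

# The bug: only the first chunk of the sorted list is ever sampled,
# so residents of the higher zip codes can never be summoned at all.
truncated = roster[:500]                   # silently drops 10005-10009
summons = random.sample(truncated, 50)

assert all(z < 10005 for _, z in summons)  # the excluded half never appears
```

No line of that code mentions race, but combined with residential segregation the output is racially skewed anyway.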

In making things that make things, we can’t always control all of the details of what we make. But we should take the time to think about potential problems and test for the ways our code might hurt someone, so we can try to anticipate these issues before we release our creations into the wild.




Michael Cook: Express Yourself

I’m back from a visit to San Francisco for GDC, where I met a lot of people and had many conversations about procedural generation, including a look behind the scenes of the mythology generation in Moon Hunters and Dwarf Fortress. (Seeing Dwarf Fortress mythologies generated live is something special.)

But that also means that a ton of procedural generation news has happened that we need to catch up on, starting with a post from Michael Cook’s revived Saturday Papers series, where he takes a close look at a research paper. In this case, it’s “Analyzing the Expressive Range of a Level Generator” by Gillian Smith and Jim Whitehead.

The expressive range of a generator, a term coined in the paper, is a way of measuring the generator’s “style and range”. Being able to quantify the variety of levels a generator can create makes it much easier to evaluate the potential output: instead of looking at a handful of possibly atypical results, we can look at statistics from thousands of generator runs.

Of course, part of the trick here is to make sure you’re measuring the right aspect of the generator output. There’s some ongoing research in that area as well.
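Here’s a rough sketch of the workflow in Python. The stand-in generator and the two toy metrics are mine; the paper itself scores platformer levels on metrics like linearity and leniency and plots them as heatmaps:

```python
import random
from collections import Counter

def generate_level():
    # Stand-in generator: a "level" is just twenty platform heights.
    return [random.randint(0, 9) for _ in range(20)]

def linearity(level):
    # Toy metric: how flat the level is overall, binned to one decimal.
    jumps = sum(abs(a - b) for a, b in zip(level, level[1:]))
    return round(1 - jumps / (9 * (len(level) - 1)), 1)

def leniency(level):
    # Toy metric: the fraction of low, easy platforms, binned to one decimal.
    return round(sum(1 for h in level if h < 5) / len(level), 1)

# Histogram thousands of runs instead of eyeballing a handful of levels.
histogram = Counter((linearity(lvl), leniency(lvl))
                    for lvl in (generate_level() for _ in range(10000)))
for cell, count in sorted(histogram.items()):
    print(cell, count)
```

A tight cluster in the histogram means a generator with a strong style; a wide spread means range. Measuring the wrong metrics, of course, tells you about neither.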

If you’d like to know more about expressive range, both Michael’s blogpost and the original paper are well worth your time.

The Saturday Papers: Express Yourself
http://www.gamesbyangelina.org/2016/03/the-saturday-papers-express-yourself/

Analyzing the Expressive Range of a Level Generator: 
https://pdfs.semanticscholar.org/09f8/fe6a89b5f5a480ab059f60a251052a31e2ed.pdf




Ben Porter, who has been mentioned here before and is currently working on a procedurally generated adventure game, recently gained his 10,000th follower on Twitter. As a present, he made a pet for each one of them.

They were well received. Some people have drawn fanart of their new pets, or put them into their games. I’m very fond of mine (pictured above).




Coronoid - Still

This demo took first place in the PC demo compo at NVScene 2015.

I like the demoscene for a lot of reasons, even if I don’t follow it super closely. It has created an outlet for computational artistic expression that isn’t constrained by gameplay or commercial viability, but nevertheless has its own form of constraints to spur new creative heights.

(via https://www.youtube.com/watch?v=7JV8b9r6r3k)




Stickup (NSFW)

A “code-based generative lyrics video” by Raven Kwok for the track “Stickup” by Karma Fields & MORTEN featuring Juliette Lewis.

Not safe for work, unless your workplace doesn’t mind swearing.




In 2011, demoscene artist viznut discovered that he could pipe the output of very short computer programs directly into the audio device.

It spawned a genre that came to be called bytebeat, a form of glitch music. There are a bunch of tools that make experimenting with these compositions easier, including Glitch Machine, Bytebeat, libglitch, Html5 Bytebeat, and this online tool.
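The whole trick fits in a few lines. Here’s a sketch in Python rather than viznut’s original C: evaluate a formula at each time step, keep the low eight bits, and treat the byte stream as raw 8 kHz unsigned 8-bit audio. The formula is one of viznut’s classics; the aplay invocation is one way to hear it on Linux:

```python
# Run as: python bytebeat.py | aplay -r 8000 -f U8
import sys

for t in range(8000 * 30):                  # thirty seconds of samples
    sample = (t * (42 & (t >> 10))) & 0xFF  # a classic viznut formula
    sys.stdout.buffer.write(bytes([sample]))
```

Everything interesting lives in that one formula line; bytebeat composition is the art of finding expressions whose bit patterns happen to sound like music.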

Viznut has posted a couple of blog posts with some analysis of the appeal of the genre.

(via https://www.youtube.com/watch?v=tCRPUv8V22o)