Conceptual art has a deep affinity with procedural generation. Emily Short has referred to this interview with Kenneth Goldsmith as “an above-average explanation of what’s interesting about procedural writing in its own right,” an estimation that I definitely endorse.

Looking at procedural generation as an artistic movement (or, rather, as one of the artistic trends using procedural generation today), the point of procgen is the mirror of conceptual art. If conceptual art is about art where the ideas are more important than the execution, then procgen is about taking those ideas and making them manifest.

While many procgen projects are absolutely about putting the output to useful and meaningful ends–test data, flexibility, entire computer-generated games–procedural generation doesn’t start there. It starts with ideas, in the exact same way that conceptual art does. Without Dada’s cut-ups, would we be able to watch the films we see today? Maybe. But I suspect we might not.

One of the things that I like about NaNoGenMo in particular is that many of the projects take an absolutely bizarre idea–what if you had a book that consisted of only the violent parts of The Iliad? What if you took all the dialog out of Pride and Prejudice? What if Alice went to Treasure Island? Or was chasing after Moby Dick? What if you had a story about recursively polite questioning elves? What do 50,000 meows actually look like?–and make them into actual books that you can read.

Jorge Luis Borges used reviews of fictional books to explore ideas without having to write out an entire novel. Procedural generation lets me explore ideas in a similar fashion–what if the characters in the stories in the 1001 Nights told stories about characters who told stories about characters who told stories…recursively?–but in a way that lets you read the result. 
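As a toy sketch of that recursive idea (not the code behind any actual NaNoGenMo entry; the teller names and phrasing are invented for illustration):

```python
import random

# Hypothetical frame-story tellers, purely for illustration.
TELLERS = ["Scheherazade", "the merchant", "the fisherman", "the barber"]

def nested_tale(depth, rng):
    """Recursively generate a frame story: each teller's tale
    contains another teller telling a tale, down to a base case."""
    teller = rng.choice(TELLERS)
    if depth == 0:
        return f"{teller} told a tale, and at last fell silent."
    inner = nested_tale(depth - 1, rng)
    return f"{teller} began a tale of one who said: \u201c{inner}\u201d"

print(nested_tale(3, random.Random(1001)))
```

The recursion depth is the only knob: crank it up and you get the 1001 Nights structure carried far past the point any human author would bother to write out by hand.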

What if we built the Library of Babel?


No Such Thing as Writer’s Block from Frieze on Vimeo.








algopop:

Generating scenes of Friends with a Neural Network by Andy Pandy

Andy has fed scripts for every scene of Friends into a Recurrent Neural Network and it learnt to generate new scenes. It has scripted some interesting events like Monica wishing the others ‘Happy Gandolf’ just after (All the dinner enters.)

Someone should take this further by using Sam Lavigne’s Videogrep to edit existing Friends footage into the scenes created by the RNN. Then I’ll post them on algopop ;) 

I’m very fond of the results that recurrent neural networks are producing. They have their limitations, especially since they have to learn what little they know about language from scratch. On the other hand, it’s a computer algorithm that can learn enough about language to write semi-intelligible results from scratch! And also Magic: the Gathering Cards!

I especially like the suggestion to tie it into other algorithms to take the results to the next level. Combining procedural generation algorithms with each other and with different kinds of input is a key way to increase the new-information ratio. A generator with high entropy and a high signal is the dream.

Roughly, the next information the generator produces should be unexpected but meaningful, which can be a tricky line to walk. One way to do it is to take advantage of the way we get pleasure from realizing that a plot was predictable in hindsight but surprising in the moment. But twists aren’t the only method: there are other ways to get the same newness-with-meaning.

This particular result, posted on Twitter by @_Pandy, uses a constraint of the generator to its advantage. RNNs work better when the text they’re trained on follows the same general structure, so that they can focus on learning commonalities. An RNN tends towards averages, so it doesn’t work as well when the average result falls between two stools. Choosing the scripts for a single television show gives the result context and keeps the training data focused.

The context of a known TV show makes it easier to imagine the nonsense scenes happening. And a model trained on focused data learns the characters’ names and the formatting of the script, so it takes much less time to produce an intelligible result.

https://twitter.com/_Pandy/status/689209034143084547
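Training an actual RNN is beyond the scope of a blog post, but the point about focused training data holds for any statistical text model. A minimal sketch with a first-order word-level Markov chain (the tiny “corpus” below is invented, not the real Friends scripts):

```python
import random
from collections import defaultdict

# A tiny invented "script" standing in for the real training corpus.
corpus = """
Monica: I can't believe this.
Chandler: Could this BE any weirder?
Joey: How you doin'?
Monica: Could you not?
Chandler: I can't believe you said that.
""".split()

# Build a first-order Markov model: word -> list of observed next words.
model = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    model[a].append(b)

def generate(start, n, rng):
    """Walk the chain up to n steps. Because the input keeps the
    'Name: line' structure, the output tends to keep it too."""
    out = [start]
    for _ in range(n):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(generate("Monica:", 8, random.Random(7)))
```

Even this crude model picks up the script format for free, which is the same reason the RNN converges on intelligible scenes faster when the training data all shares one structure.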




An Analog History of Procedural Content Generation

Procedural generation didn’t start with computers. While computational algorithms have played a big part in today’s wide range of generative forms, many techniques can and have used analog means to generate their results.

In this paper, Gillian Smith looks at some historical examples, such as the modularity of the dungeon geomorphs used by D&D, the way the sounds in the Milton Bradley-published Simon game were designed to work in arbitrary orderings, and the use of algorithmic assembly in D&D encounter tables. Modularity and directed randomness are themes that cross many different generative forms.

Gillian discusses some of the motivations that prompted people to explore procedural generation in the first place: replay value, creative assistance, and exploring a new expressive medium. These are still common goals today, but the paper points out that there’s room to find new purposes for using procedural generation.

She also mentions some motivations that have fallen by the wayside: having the computer as an unbiased agent, and replacing humans. These have been pretty prominent in the rhetoric of past procedural generation, but I agree that they are somewhat misguided: as Ian Bogost argues, algorithms can be, and often are, biased, and they require a lot of design work. The designer now operates at a higher level of abstraction, but it still takes human input to shape the meaning.

Procedural generation doesn’t replace the painter. It’s more like giving the painter a paint gun.

http://sokath.com/main/files/1/smith-fdg15.pdf



It’s @erat_viator, the bot that shares some of the same ideas behind my NaNoGenMo novel.

I should do a more complete writeup, but the basic idea is that it looks at the network data from ORBIS, plots a route across the Roman Empire, and uses the GPS coordinates to search out things that it encounters along the way. Right now it’s digging up a lot of coins, but there are literally dozens of databases that it could potentially use and I’d like to incorporate at least a few of them.
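The route-plotting step can be sketched like this (the road network and travel costs below are made up for illustration; ORBIS’s real network data is far richer):

```python
import heapq

# Hypothetical travel costs in days between Roman cities.
ROADS = {
    "Roma":       {"Mediolanum": 8, "Brundisium": 13},
    "Mediolanum": {"Roma": 8, "Lugdunum": 12},
    "Lugdunum":   {"Mediolanum": 12, "Londinium": 18},
    "Brundisium": {"Roma": 13},
    "Londinium":  {"Lugdunum": 18},
}

def route(start, goal):
    """Dijkstra's algorithm over the road network: returns the
    cheapest (cost, path) pair, or None if the goal is unreachable."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, here, path = heapq.heappop(queue)
        if here == goal:
            return cost, path
        if here in seen:
            continue
        seen.add(here)
        for there, days in ROADS[here].items():
            if there not in seen:
                heapq.heappush(queue, (cost + days, there, path + [there]))
    return None

print(route("Roma", "Londinium"))
```

Each stop along the returned path would then supply the GPS coordinates for looking up coins and other finds in the various databases.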

The bot started at Rome and recently returned there, only to leave on another journey. I’m not sure where it’ll head next. It hasn’t told me, and I haven’t asked it yet.







Library of Babel 3D

I kind of like that adaptations of Borges’ “Library of Babel” are becoming their own microgenre. The unique thing about Keiwan Donyagard’s version is that it acts as an interface to Jonathan Basile’s web version, meaning that not only is it a fairly faithful adaptation, down to the spatial configuration, but that it also includes the ability to search the library.

It misses a trick if you jump off the balcony, because the loading speed doesn’t let you fall indefinitely like in the story. Still, it’s otherwise radically faithful to both Borges’ story and the contemporary algorithmic implementation.

Putting yourself into a virtual recreation of a space from a story is a way to gain insights into the text that you might otherwise overlook. It’s not the same thing as reading the story, just as following my bot that’s exploring the Roman Empire isn’t the same thing as travelling there myself, but it does give insights that we otherwise wouldn’t have taken the time to contemplate.

Exploring Borges’ literary space in a first-person perspective made me realize something that I think is subtly embedded in Borges’ story and that is also highly relevant to procedural generation. Which is that you’ll never encounter a large open space in the Library of Babel.

Even with an infinite generator, you can’t get more out of it than you put into it. Infinity doesn’t mean everything; or, rather, there are levels of infinity. There are an infinite number of integers, but that list doesn’t include any of the (also infinite) fractions.

The Library is infinite, but every room within it is a hexagon. The inhabitants don’t even conceive of a differently sized space, because it wouldn’t fit. Even in the story itself, how can a language of only 23 letters include a book entitled Axaxaxas mlö? Has Tlön colonized even this place?

Just because your procedural generator is infinite doesn’t mean that it will include all possible variations. There will be things outside its scope, rooms that it is unable to construct. Even if some of its outcomes are theoretically possible, your random number generator’s flaws may fail you.
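One concrete, well-known version of that failure comes from card shuffling: a generator seeded with a 32-bit value can only ever start in 2^32 states, far fewer than the orderings of a 52-card deck, so most shuffles are unreachable no matter how long it runs. Checking the arithmetic:

```python
import math

deck_orderings = math.factorial(52)  # 52! distinct shuffles
seed_states = 2 ** 32                # reachable states for a 32-bit seed

# The fraction of all shuffles such a generator can ever produce:
reachable_fraction = seed_states / deck_orderings
print(f"{deck_orderings:.2e} orderings, {seed_states:.2e} seeds")
print(f"reachable fraction: {reachable_fraction:.2e}")
```

The theoretically possible outcomes are all there in the state space of decks; the generator simply can never reach almost any of them.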

But just because your generator only has a cardinality of Aleph-Null at best, that doesn’t mean that it is any less unimaginably vast. If you added up all the grains of sand and the stars in the sky, you would never equal the number of imperfect copies of just one of the books in the Library.
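The vastness is easy to check: in Borges’ story each book has 410 pages of 40 lines of 80 characters, drawn from 25 orthographic symbols, so the number of distinct books is 25 raised to the 1,312,000th power. A quick computation of how big that is:

```python
import math

chars_per_book = 410 * 40 * 80  # pages x lines x characters = 1,312,000
symbols = 25

# Number of decimal digits in 25**1312000, via logarithms
# (no need to compute the giant integer itself).
digits = math.floor(chars_per_book * math.log10(symbols)) + 1
print(f"distinct books: a number with {digits:,} digits")
```

Common estimates of the grains of sand on Earth or the stars in the observable universe are numbers with roughly 19 to 24 digits; the count of distinct books is a number with over 1.8 million digits.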

Combine that with the insight that you can encode any given text via another text, and you can recreate the librarians’ despairing search for meaning and order.

https://libraryofbabel.info/LoB3D.html



The nice, well-informed people who are already following the blog on Tumblr don’t need this, but there’s another way to get updated when there’s a new post. The site has an RSS feed, which you can subscribe to with your favorite RSS reader: http://procedural-generation.tumblr.com/rss

There’s also a Twitter account, which is mostly notifications of the posts but also includes some other stuff. You’ll occasionally see retweets of projects I’ll talk about later, for example: https://twitter.com/proc_gen

And you can always ask questions, though don’t expect an immediate response: Tumblr improved the interface, but it’s still hard to respond to them unless I’ve got a response completely typed up.




par_shapes.h

One of the practical uses for procedural generation that we’ve discussed in the past is as test data. When you’re testing a system, you need to put it through its paces. Tests that are reproducible and have as few outside dependencies as possible are very valuable, which means that generating the test data is often the best way to go.

That was certainly Philip Rideout’s goal in writing par_shapes.h, an MIT-licensed single-file library for procedurally generating test geometry. With everything in code, you can use it to test graphics APIs in isolation, using complex shapes without worrying about whether your file-loading code is contributing to the errors.

The library includes platonic solids, parametric shapes, rocks, L-systems, and trefoil knots, which is a nice selection of basic procedural geometry. Plus the code’s on GitHub.
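The principle doesn’t depend on C or any particular graphics API. Here’s a sketch in Python (not par_shapes itself) of generating a parametric sphere mesh in code and sanity-checking it with Euler’s formula, so the test needs no asset files on disk:

```python
import math

def uv_sphere(n_lat, n_lon):
    """Generate a UV-sphere triangle mesh: returns (vertices, triangles).
    Purely procedural, so tests have zero file-loading dependencies."""
    verts = [(0.0, 1.0, 0.0)]                      # north pole
    for i in range(1, n_lat):
        phi = math.pi * i / n_lat
        for j in range(n_lon):
            theta = 2 * math.pi * j / n_lon
            verts.append((math.sin(phi) * math.cos(theta),
                          math.cos(phi),
                          math.sin(phi) * math.sin(theta)))
    verts.append((0.0, -1.0, 0.0))                 # south pole

    def idx(i, j):                                 # vertex index on ring i
        return 1 + (i - 1) * n_lon + (j % n_lon)

    tris = []
    for j in range(n_lon):                         # polar caps
        tris.append((0, idx(1, j), idx(1, j + 1)))
        tris.append((len(verts) - 1, idx(n_lat - 1, j + 1), idx(n_lat - 1, j)))
    for i in range(1, n_lat - 1):                  # quad strips, split in two
        for j in range(n_lon):
            a, b = idx(i, j), idx(i, j + 1)
            c, d = idx(i + 1, j), idx(i + 1, j + 1)
            tris.extend([(a, b, d), (a, d, c)])
    return verts, tris

verts, tris = uv_sphere(8, 12)
# Euler characteristic of a sphere: V - E + F == 2, a cheap mesh sanity test.
edges = {frozenset(e) for t in tris
         for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0]))}
print(len(verts) - len(edges) + len(tris))
```

Because the geometry is deterministic, any failure in a downstream renderer points at the renderer, not at the data.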




Embers (2012)

Speaking of procedural generation that doesn’t rely on infinite variation, the primary use of procgen in the demoscene is to save space.

The winner of the then-new 1 kilobyte category in Assembly Summer 2012, “Embers” fits into only 1024 bytes.

At that scale, the two programmers who created it basically had to use fractals, because they only had about 350 bytes to squeeze in the GLSL shader code for the visuals, once the framework code was in place. Fractals are common enough to be cliche in the demoscene, but they judged that the fairly-new mandelbox would be fresh enough to make an impact.
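The mandelbox is built from two cheap conditional folds iterated together. A plain-Python sketch of the standard formulation (not their 350 bytes of GLSL; parameter values are the common defaults, assumed for illustration):

```python
def box_fold(v, limit=1.0):
    # Reflect each component back toward the box [-limit, limit].
    return [2 * limit - x if x > limit
            else -2 * limit - x if x < -limit
            else x for x in v]

def sphere_fold(v, min_r2=0.25, fixed_r2=1.0):
    # Scale points near the origin outward (inversion-style fold).
    r2 = sum(x * x for x in v)
    if r2 < min_r2:
        s = fixed_r2 / min_r2
    elif r2 < fixed_r2:
        s = fixed_r2 / r2
    else:
        s = 1.0
    return [x * s for x in v]

def escapes(c, scale=2.0, max_iter=50, bailout=1e4):
    """Iterate v -> scale * sphereFold(boxFold(v)) + c; return the
    iteration at escape, or max_iter if the point stays bounded."""
    v = list(c)
    for i in range(max_iter):
        v = sphere_fold(box_fold(v))
        v = [scale * x + ci for x, ci in zip(v, c)]
        if sum(x * x for x in v) > bailout:
            return i
    return max_iter

print(escapes((0.0, 0.0, 0.0)), escapes((10.0, 10.0, 10.0)))
```

A shader version raymarches a distance estimate built from the same folds, which is why the whole structure compresses into so few bytes: the complexity is all in the iteration, not the code.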

Slightly unusually, the demo is written for OSX, and it makes use of tricks like a custom, content-aware gzip compressor. The result is a compressed binary smaller than this blog post.

While demos could theoretically have infinite variation, that’s actually somewhat counter to the demoscene’s norms of reproducible technical virtuosity. Though there have been moves towards introducing interactivity (of which .kkrieger is probably the most well known outside the scene), variation is usually not a primary goal.

An interview with the programmers: http://www.creativeapplications.net/mac/embers-by-tda-1-kilobyte-to-rule-them-all/

(via https://www.youtube.com/watch?v=dgDlqC19kss)




Procedural Generation and Film

When we talk about procedural generation, the most common application that comes to mind is games. But generative processes are just as deeply embedded in other commercial mediums, including film.

I’ve talked about how procedural generation is useful for things other than infinity. So here’s a practical, real-world example of something that couldn’t have been accomplished without procedural generation but doesn’t derive its value from infinite possible combinations. In fact, the final product was exactly one generated configuration.

You’ve probably seen this trailer, since 300 had some measure of pop culture impact. Particularly the “This is Sparta!” scene, for which Karl Stiefvater and Bret St. Clair created the pit entirely procedurally.

Why would they go to all that trouble when the scene was always going to look the same every time you watch the film? Particularly since the reason they were brought in to work on the shot was that it was already behind schedule. So why make it procedural?

Because it gave them flexibility.

A constant in VFX is that the client will request changes. Even though the pit was only going to be used for one scene, it went through many variations before the director signed off on the finished design. By generating the pit procedurally, the VFX technical directors could tweak individual aspects of its look, such as the thickness of the grout, without having to repaint the entire matte painting by hand.
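As a tiny analogy (nothing like their actual production pipeline), here is a generated brick texture where grout thickness is a single parameter; change one number and the whole pattern regenerates:

```python
def brick_pattern(width, height, brick_w=8, brick_h=4, grout=1):
    """Render an ASCII brick texture: '#' is brick, '.' is grout.
    Regenerating with a new `grout` value redraws everything."""
    rows = []
    for y in range(height):
        course = y // brick_h
        offset = (brick_w // 2) * (course % 2)  # stagger alternate courses
        row = []
        for x in range(width):
            in_grout_y = y % brick_h < grout
            in_grout_x = (x + offset) % brick_w < grout
            row.append("." if in_grout_x or in_grout_y else "#")
        rows.append("".join(row))
    return "\n".join(rows)

print(brick_pattern(32, 8, grout=1))
print()
print(brick_pattern(32, 8, grout=2))  # thicker grout, no repainting
```

That one-parameter tweak is the whole argument for procedural assets in a pipeline where the director may ask for a change at any moment.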

Flexibility is critically important in commercial art production, since being able to make changes as the need arises saves time and money. Even in fine art, a tool with greater flexibility can help the artist realize their vision. Using procedural methods in an artistic pipeline gives it that flexibility.




Infinity and Procedural Generation

I was asked on Twitter if I thought that procedural generation sometimes gets a little too obsessed with infinite play space. (The asker points out that architects are trained to be aware that they can only construct one building.) To which my answer is: yes. Most definitely yes.

Infinity is frequently brought up whenever anyone discusses procedural generation, sometimes to the point that it seems to be the definition. Infinity is bound up in ideas about replayability, open-worlds, and our expectations for what procedural generation can do. But I think that’s a misconception.

Infinity is seductive. Procedural generation is one of the few ways that we can build a practical infinite thing, and many developers have chased that white rabbit. Whether it’s out of a desire for replayability or a search for the transcendent, infinity is something that lures us deeper down the rabbit hole.

Infinity is deceptive. You can use procedural generation to create an infinite amount of content, but the player or the viewer will eventually find the patterns in your generator. After that, their minds collapse your infinite content into a symbol, letting them mentally abstract it away while they go look for other patterns. You can’t get infinite replayability just by generating new content, because replayability is about continued learning.

Infinity is trivial. Worse than an obvious pattern, sometimes the player won’t be able to perceive any patterns at all, and it’ll all be just so much chaotic white noise. The player’s mind dismisses noise quickly, especially if it doesn’t have any bearing on their interactivity. It’s really easy to hook up a noise generator to an output and watch it go, but the amount of effort you put in is unfortunately a reflection of what you’re likely to get out of it.

So I think that deliberately recognizing that, as Chris Welch said, “any one player will only ever see 6 of these” will help you, as a creator of a procedural generator, to focus your efforts on making those six results the most interesting and unique six results that a player will see.

Constraints are one of the most practical artistic principles that I know, and that applies to procedural generation as well. Deliberately defining boundaries to your output introduces order. The big difference between procedural content and human-authored content is that a human creator can afford to choose only the unique results that have meaning and larger associations. But on another level, all procedural generators are authored. You, as the creator of the generator, can tailor the output or push it to extremes.

NaNoGenMo has a goal of producing 50,000 word novels, which it borrows from NaNoWriMo. But the purpose it serves is very different. I thought, at first, that the word count didn’t matter that much, since it’s trivially easy to write 50,000 “meows”, or to run a generator for an indefinite period. 

But the uniqueness of NaNoGenMo is precisely that we’re trying to write full-length novels. And to do that, you need to find a way to make 50,000 interesting words. A subtle but critical difference. It’s why I’m excited by the results of the long-form plot generation experiments from this last November.



Which is not to say that infinity is useless. As I said before, the whole point of Borges’ “Library of Babel” is to contemplate infinity, and a generator that deliberately invokes infinity is one of the few ways I know to create a feeling of transcendence and eternal possibility.

Infinity within another, constrained system is also useful. Most of the major Usenet roguelikes only generate a limited number of levels per game. This promotes tactical play. But some, like ADOM and Dungeon Crawl, also include an optional infinite dungeon. In both cases, the infinite dungeon offers possibilities (because you can encounter a wide variety of things that you might not otherwise come across) but also risks. It’s hard to retreat from them, so it’s easy to get in over your head, and thus exploring the infinite depths is balanced with a push-your-luck mechanism. Instead of a hard limit on infinity, or a loose limit of the player’s boredom, they instead use the soft limit of the character’s resources and abilities.

Tomorrow I’ll talk about a procedural generation project that was created to make a single, unchanging output, and how it was crucial to the success of something that you’ve probably heard of.