There are a lot of Twitter and Tumblr accounts that are mostly forms of curation: finding a specific kind of content and introducing it to a new audience or giving it context. (This blog falls in that category.) But curation bots take it a step further and automate the process.

Created by Alfonz M, The LDJAM Bot grabs a random game from the database of Ludum Dare games and posts about it.

This is a great way to draw attention to games that might have been overlooked in the flood of entries that comes from each jam.

The internet created a cultural shift from gatekeepers deciding what content could afford to be published to a democratization of publication, but it brought with it a whole new discovery problem: there’s simply too much out there to discover everything, and the new struggle is in the finding. Instead of taste-anticipating algorithms like the ones Amazon or Google use, bots like these are taste-expanding, exposing people to new things that a human curator might not have discovered on their own.

https://twitter.com/ldjambot




Procedural Doesn’t Mean Random (2015)

A PROCJAM talk centered on an important point: interesting procedurally generated content isn’t random, though it makes use of randomness. Lots of talk about No Man’s Sky and some of the philosophies of creating procedurally generated content.

You’ll want to turn up your volume for the talk (and turn it down for the trailer!)








Colt55 (2015)

One of the entries for the first PROCJAM, Colt55 is an infinite wandering game by owendeery. With horses. And weirdness.

http://owendeery.itch.io/colt55






Rescue: The Beagles (2008)

In 2008, the game development forum TIGSource had a procedural generation contest. Several interesting projects came out of that, but the winner was Rescue: The Beagles, a game by Nenad Jalsovec about rescuing beagles from animal testing.

It uses its height map generation and object placement to cleverly create lots of little action movement puzzles. While its gameplay language draws from the long history of arcade games, it creates its tight action spaces on the fly. The impressive thing here is not just that the levels are procedurally generated, it’s that they’re well-designed levels. That sense of flow grows out of the cooperation between the rules of the game and the construction of the level generator.
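Just to gesture at the kind of technique involved (this is my own toy illustration in Python, not the game’s actual generator): roll a height profile with a random walk, then drop pickups and hazards so that each placement implies a small movement decision.

    import random

    def make_level(length=40, seed=None):
        """Toy level: a 1D height profile plus object placements."""
        rng = random.Random(seed)
        heights, h = [], 3
        for _ in range(length):
            # A gentle random walk, clamped so slopes stay traversable.
            h = max(1, min(6, h + rng.choice((-1, 0, 0, 1))))
            heights.append(h)
        objects, x = {}, 2
        while x < length:
            # A beagle just past a drop, or an obstacle on a rise, forces
            # the player to commit to a path: a tiny action puzzle.
            objects[x] = "beagle" if heights[x] < heights[x - 1] else "obstacle"
            x += rng.randint(3, 6)
        return heights, objects

    heights, objects = make_level(seed=8)
    print(heights)
    print(objects)

The real trick, as the game shows, is tuning the generator so its output always lands inside the space of levels that the movement rules make fun.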

http://www.16x16.org/category/rescue-the-beagles/







Terminal Time (1999)

Terminal Time is a documentary by Michael Mateas, Steffi Domike, and Paul Vanouse that is assembled in real-time based on audience feedback during the screening. This is more than just branching: behind the scenes there’s a complex system that takes the raw footage and assembles it into a coherent story that reflects the biases of the audience.

The original paper that describes it has examples of its inner workings, including the code for some of the events. 
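To give a flavor of what that kind of assembly might look like, here’s a drastically simplified toy of my own in Python (Terminal Time’s actual architecture layers rhetorical goals and ideological models on top of clip selection): tag the footage with themes, and score clips against the audience’s polled leanings.

    from dataclasses import dataclass

    @dataclass
    class Clip:
        title: str
        tags: dict  # theme -> how strongly the clip supports it (-1 to 1)

    def assemble(clips, audience_bias, length=3):
        """Pick the clips that agree most with the audience's polled biases."""
        def score(clip):
            return sum(audience_bias.get(t, 0) * w for t, w in clip.tags.items())
        return sorted(clips, key=score, reverse=True)[:length]

    clips = [
        Clip("march of progress", {"technology": 0.9}),
        Clip("luddite riot", {"technology": -0.8, "labor": 0.7}),
        Clip("cathedral raising", {"religion": 0.9}),
    ]
    # An audience that polled pro-technology and mildly anti-religion:
    print([c.title for c in assemble(clips, {"technology": 1.0, "religion": -0.3})])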

From my point of view, one of the more important insights from Terminal Time is the search for a balance that is neither mostly hand-authored, nor completely emergent, instead seeking a middle ground:

Much of the architectural work that went into the iterative prototyping of Terminal Time was a search for an architecture providing authorial “hooks” on the right level of abstraction: low-level enough to allow significant combinatorial possibilities and the capability for surprise, yet high-level enough to allow the exertion of authorial control over multiple levels of the story construction process.

I think that following this principle is key in creating procedurally generated experiences that are dynamic but still carry meaning. You could generate everything with an emergent system (and those are often interesting experiments) but art takes focus. It’s often as much about what you leave out as it is about what you put in. 

The creators mostly seem to describe Terminal Time as an AI project, rather than calling it procedural generation. But I think it fits comfortably under both umbrellas.

http://www.terminaltime.com/




Procedural generation is not just for games. There’s a whole world out there to explore, and I don’t mean a terrain generator. One application is Twitter bots: software that automatically posts content to Twitter.

Some, like @everyword or @IAM_SHAKESPEARE, are simple recontextualizers, linearly reposting a corpus. There are some clever results out there, but it’s fairly easy to grasp how they work under the hood. The interest comes from the bite-sized juxtaposition, not the mechanics of the shuffling.

Others are more ambitious, using everything from Markov chains to syntactic analysis to image manipulation to procedurally generate brand-new content automatically. Some just generate one type of thing repeatedly, while others take advantage of their platform and react to other people’s tweets or start conversations.
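The Markov chain approach in particular is easy to sketch (a generic illustration in Python, not any particular bot’s code): build a word-level chain from a corpus, then walk it to produce new text.

    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        """Map each sequence of `order` words to the words that follow it."""
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, order=2, length=20):
        """Random-walk the chain from a random starting state."""
        out = list(random.choice(list(chain)))
        for _ in range(length - order):
            followers = chain.get(tuple(out[-order:]))
            if not followers:  # dead end in the chain: stop early
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = "the quick brown fox jumps over the lazy dog and the quick cat"
    print(generate(build_chain(corpus)))

With a corpus that small the output is nearly a quote; fed a few novels, the same code starts producing the half-plausible non sequiturs these bots are loved for.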







Sunrose (2014)

A delightful little procedural generation toy: type in seven letters, and it generates a corresponding sunrose. The visuals are lovely, but what really sells it is the way the audio responds to the player’s interactions.
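I’d guess (and this is just a guess about how the toy works) that the letters are being used as a random seed, so the same seven letters always give back the same sunrose. The pattern is easy to sketch in Python:

    import random

    def petal_params(word, petals=12):
        """Deterministic parameters: the same input word gives the same rose."""
        rng = random.Random(word.lower())  # seed the PRNG with the string
        return [(rng.uniform(0.5, 1.0),    # petal length
                 rng.uniform(0, 360))      # petal angle
                for _ in range(petals)]

    print(petal_params("sunrose"))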

http://tak.itch.io/sunrose




Say that you have a city builder, and you want to generate some buildings between your nicely curving roads. Or you are generating your cyberpunk city, and you put in a lot of work to avoid it being a boring grid. Or you’re doing urban planning for a real city, and you need to drop in some buildings so you can see what it will look like when it’s actually built. But the places to fit buildings come in all kinds of weird shapes: how do you figure out what shape the buildings should be? And if you make a small tweak to the roads, how do you keep the parcel subdivision persistent so the new version isn’t too different from the old one?

Enter Procedural Generation of Parcels in Urban Modeling, a paper by Carlos A. Vanegas, Tom Kelly, Basil Weber, Jan Halatsch, Daniel G. Aliaga, and Pascal Müller. In it, they discuss a method to take a city block that can be an arbitrary shape and divide it into parcels to place buildings on, one robust enough to handle live editing.
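Their method is worth reading in full, but to make the problem concrete, here’s a toy subdivision in Python (a common simple approach, not the paper’s algorithm, and it won’t survive live editing the way theirs does): recursively cut each block across the long axis of its oriented bounding box until the pieces are parcel-sized. It leans on the shapely geometry library.

    from shapely.geometry import Polygon, LineString
    from shapely.ops import split

    def subdivide(block, max_area):
        """Recursively split a block polygon into parcels below max_area."""
        if block.area <= max_area:
            return [block]
        # The oriented bounding box gives us the block's dominant direction.
        coords = list(block.minimum_rotated_rectangle.exterior.coords)[:4]
        edges = [(coords[i], coords[(i + 1) % 4]) for i in range(4)]
        edges.sort(key=lambda e: LineString(e).length)
        long_a, long_b = edges[2], edges[3]  # the two longer OBB sides
        mid = lambda p, q: ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
        # Cut between the midpoints of the long sides, halving the block.
        cut = LineString([mid(*long_a), mid(*long_b)])
        pieces = split(block, cut).geoms
        if len(pieces) < 2:  # the cut failed to cross; give up on this block
            return [block]
        parcels = []
        for piece in pieces:
            parcels.extend(subdivide(piece, max_area))
        return parcels

    block = Polygon([(0, 0), (8, 0), (9, 4), (1, 5)])
    print(len(subdivide(block, max_area=6.0)))

Notice what this naive version gets wrong: parcels can end up landlocked with no street frontage, and a tiny change to the block re-rolls every cut. Handling exactly those problems is what the paper is about.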







Warning Forever (2003)

A vertical-scrolling shoot-em-up by Hikoza T Ohkubo that consists entirely of boss battles against opponents that are procedurally generated to counter the weaknesses of your fighting style. As you beat each stage, the next boss gets generated with new abilities based on how you destroyed the last one. 

This kind of adaptive play, with systems that react to the player’s actions and generate new content based on them, is a powerful approach. It gives the player’s actions inherent meaning, demonstrating an active recognition of the choices the player makes.

This kind of intimate feedback is difficult to create by hand, because choices need to be anticipated by the designer to be recognized, resulting in a relatively limited set of verbs that the game recognizes. But if we create a procedural system that can communicate to the player in its own language, we can create a dialog between the player and the game.
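The simplest version of the idea is just weighted selection (a toy illustration in Python with made-up part names, not Warning Forever’s actual system): bias the next boss away from whatever the player countered most easily.

    import random
    from collections import Counter

    PARTS = ["laser", "missile_pod", "shield", "ram_spike"]

    def next_boss(destroyed_first, size=4):
        """Downweight parts the player has been destroying quickly."""
        hits = Counter(destroyed_first)
        weights = [1.0 / (1 + hits[p]) for p in PARTS]
        return random.choices(PARTS, weights=weights, k=size)

    # The player shredded the laser twice and a missile pod once, so the
    # next boss leans toward shields and ram spikes.
    print(next_boss(["laser", "laser", "missile_pod"]))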




You’d think that putting a road down would just be a matter of pathfinding, but a good road often needs to alter the terrain a bit. The road needs to be flat enough to drive on, sloped enough to handle the terrain, with bridges, tunnels, and excavation to carve into the sides of hills. Not to mention that the road shouldn’t be too sharply curved.

Here’s a paper from 2010 by Eric Galin, Adrien Peytavie, Nicolas Maréchal, and Eric Guérin talking about how they generated roads. They calculate a discrete shortest path, using segment path masks to take care of the difference between discrete segments and a continuous path, plus a bunch of cost functions and calculations to handle curves and bridges.
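To make the discrete step concrete, here’s a minimal Python sketch with a made-up cost model (the paper’s actual costs for slope, curvature, bridges, and tunnels are far richer): Dijkstra over a heightmap grid, where steep transitions between cells cost extra.

    import heapq

    def road_path(heightmap, start, goal, slope_penalty=10.0):
        """Cheapest grid path where climbing costs extra; assumes goal is reachable."""
        rows, cols = len(heightmap), len(heightmap[0])
        dist, prev = {start: 0.0}, {}
        pq = [(0.0, start)]
        while pq:
            d, (r, c) = heapq.heappop(pq)
            if (r, c) == goal:
                break
            if d > dist.get((r, c), float("inf")):
                continue  # stale queue entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                # Flat ground is cheap; elevation change is penalized, so
                # the route drifts toward gentler slopes.
                climb = abs(heightmap[nr][nc] - heightmap[r][c])
                nd = d + 1.0 + slope_penalty * climb
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)], prev[(nr, nc)] = nd, (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
        path, node = [], goal
        while node != start:  # walk back from the goal to recover the route
            path.append(node)
            node = prev[node]
        path.append(start)
        return path[::-1]

    terrain = [[0, 0, 5, 0],
               [0, 1, 5, 0],
               [0, 0, 0, 0]]
    print(road_path(terrain, (0, 0), (0, 3)))  # detours around the ridge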

Once they had a path, they used it to generate clothoid splines. Some bits were further segmented to mark bridges and tunnels, and then the roads themselves were blended with the terrain by removing the vegetation along the route, refining the mesh, and adapting it in the blend regions next to the road.

http://arches.liris.cnrs.fr/publications/EG2010.html