Priorities in Generation & Generalizing from Examples

I finally got around to playing Invisible Inc this week (you don’t want to see the length of my to-be-played list), which prompted some thoughts about your goals while designing a generator.

The levels in Invisible Inc don’t even try to be realistic. They’re assembled from rooms and corridors in a videogame fashion: here’s the goal, there’s the exit, sprinkle in some guards and gated paths in between. It’s not simple: it knows to place rewards in dead ends, plan out guard patrol paths, and hide cameras in clever places.
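
That kind of placement logic is simple to sketch. Here’s a toy illustration (not Klei’s actual code, just my own guess at what such a rule might look like): treat the floor plan as a graph of rooms and drop the rewards in the dead ends.

```python
import random

def place_rewards(rooms, connections, num_rewards, rng=random):
    """Toy rule: put rewards in dead-end rooms (rooms with exactly one connection).

    rooms        -- list of room ids
    connections  -- list of (room_a, room_b) pairs
    num_rewards  -- how many rewards to place
    """
    degree = {r: 0 for r in rooms}
    for a, b in connections:
        degree[a] += 1
        degree[b] += 1

    dead_ends = [r for r in rooms if degree[r] == 1]
    rng.shuffle(dead_ends)

    # Prefer dead ends; fall back to other rooms if there aren't enough.
    fallback = [r for r in rooms if r not in dead_ends]
    rng.shuffle(fallback)
    return (dead_ends + fallback)[:num_rewards]

# Example: a small floor plan with two dead-end rooms ("vault" and "server").
rooms = ["entry", "hall", "vault", "server", "office"]
connections = [("entry", "hall"), ("hall", "vault"),
               ("hall", "server"), ("hall", "office"),
               ("entry", "office")]
print(place_rewards(rooms, connections, 2))
```

The real generator is obviously juggling a lot more than this (patrol routes, camera sightlines, pacing), but the point is that each of those is a deliberate rule, not an attempt to imitate a real building.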

The downside is that the logic of the levels isn’t obvious. There’s no real-world visible justification for the layout: any door can hide a prison cell or a teleporter exit. Your anticipation of what’s in the next room is based on what you can survey, not on your learned expectations.

Given the limitations of production, this was probably the right call. After all, this is part of the problem that brought down Introversion’s canceled Subversion: it had some brilliant, realistic-looking procedural building-interior tech, but they never managed to crack the problem of making the gameplay fun.

Invisible Inc, on the other hand, knows exactly what it’s about, which let them radically rethink the stealth game. The buildings it generates might not be realistic, but they work for the gameplay.

I’d still like to see a deeper, more meaningful logic that can make more aesthetic layouts. But it has to be balanced against the primary goal of the level generator: to make playable levels.

Good underlying meaning can make better, more playable levels; the trick is making it actually work. Fortunately, different projects have different goals, and cutting-edge research into procedural generation can use those different goals to triangulate new places to experiment.

Working from Examples

I’ll probably have more to say about Invisible Inc itself once I see more of it. My current reaction is based on a handful of generated levels, which points to another issue I continually confront: it can be hard to generalize from a handful of examples.

When I’m playing a game, it takes a while for me to be sure that I’ve seen enough variation to say that I can feel the contours of the generator. Or, in some cases, that there’s generation going on at all. The same is true when I’m building a generator: since I can read the code, I can guess at the outcome, but I can’t know how much variation the output actually has without extensive experimentation.

Some people are working on ways to address this. Michael Cook’s Danesh, for example, is a toolkit to bring better analysis to designing generators, with testing, visualization, constraints, and automatic searches for variable settings.
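
Danesh itself is a Unity tool, so the sketch below isn’t its API; it’s just a toy Python illustration of the underlying idea: instead of eyeballing a handful of outputs, run the generator a few hundred times, compute a couple of metrics per level, and look at the distribution. The generator stub and the metrics here are hypothetical placeholders.

```python
import random
from collections import Counter

def toy_generator(rng):
    """Stand-in generator: a 'level' is just a list of room types.
    Purely hypothetical, only here to give us something to sample."""
    size = rng.randint(4, 12)
    return [rng.choice(["corridor", "office", "vault", "lab"]) for _ in range(size)]

def metrics(level):
    """Two toy metrics: level size and the fraction of 'interesting' rooms."""
    interesting = sum(1 for room in level if room in ("vault", "lab"))
    return len(level), interesting / len(level)

def expressive_range(generator, samples=500, seed=0):
    """Sample the generator repeatedly and bucket the metric values,
    so the spread of outputs is visible rather than guessed at."""
    rng = random.Random(seed)
    histogram = Counter()
    for _ in range(samples):
        size, interest = metrics(generator(rng))
        histogram[(size, round(interest, 1))] += 1
    return histogram

# Print the first few buckets of the distribution.
for (size, interest), count in sorted(expressive_range(toy_generator).items())[:10]:
    print(f"size={size:2d}  interest={interest:.1f}  count={count}")
```

Charting those buckets (or letting a tool search the parameter space for you) is a much faster way to feel the contours of a generator than playing level after level and hoping you’ve seen a representative sample.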