The Exhibition of Babel

Meandering through the ProcJam submissions, this one by Marcos Donnantuoni caught my eye. It is an infinite art exhibition of procedurally generated paintings.

I’m a known sucker for things inspired by “The Library of Babel”, and in this case I was also reminded of Strangethink’s virtual art galleries. The choice between human order and divine disorder also touches on a recurring theme around here.

https://marcosd.itch.io/the-exhibition-of-babel




X Degrees of Separation

Stepping away from ProcJam for a moment, here’s an entirely unrelated project: A team at the Google Cultural Institute’s Lab has built a machine-learning based approach to exploring image similarity. Using a large collection of art images, it attempts to construct a route between any two images:

What immediately strikes me is that these paths are simultaneously similar to what a human would construct and radically different from it.

That is, each step the path takes is immediately obvious to the observer. Each step from a dress to a vase makes sense; the transition is smooth.

At the same time, it makes connections that no human would have thought of, while ignoring the links that humans regard as important. It’s entirely based on visual similarity, so it ignores culture, history, and symbolism in its attempts to find common links.

Sometimes those connections show up anyway, where the visual similarity is too close to ignore. Sometimes the new connections it finds tell us something about the world that we wouldn’t have realized otherwise. But the iron rule of the machine’s metric stands apart from the human associations we’re used to.

It feels, in a sense, like asking an alien to be an art critic: an alien who seizes on a single criterion and follows it rigidly. And in doing so, it sometimes exposes things that we didn’t realize about ourselves.

Though I’m not quite sure that’s fair to the programmers: they’re the ones who asked the machine to follow this rule, after all, and they’re (presumably) human. Like all algorithms, the result is based on what we put into it, combined with the Sorcerer’s Apprentice effect of getting exactly what we asked for, no more and no less. The computer is an exactingly literal genie.

You can see the experiment in action here: https://artsexperiments.withgoogle.com/xdegrees







Proc Skater 2016

This is a ProcJam project that stands out to me for a couple of reasons.

First off, it’s one of the early projects to use WaveFunctionCollapse to build its skatepark map. The map is generated based on your skater’s name (which can also be randomly generated) so to get back to the same map you just need to enter the same name.
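The name-as-seed trick is simple to reproduce. Here’s a minimal sketch (in Python rather than the game’s actual Clojure, and with a trivial stand-in for the WaveFunctionCollapse step): hash the name into an integer and use that to seed a deterministic RNG.

```python
import hashlib
import random

def rng_for_name(name: str) -> random.Random:
    # Hash the skater's name into a stable integer seed,
    # so the same name always yields the same map.
    digest = hashlib.sha256(name.encode("utf-8")).hexdigest()
    return random.Random(int(digest, 16))

def generate_park(name: str, size: int = 8):
    rng = rng_for_name(name)
    # Stand-in for the real WaveFunctionCollapse step: just pick
    # tile indices deterministically from the seeded RNG.
    return [[rng.randrange(4) for _ in range(size)] for _ in range(size)]

# Same name, same park.
assert generate_park("Tony") == generate_park("Tony")
```

Any stable hash works here; the point is that the seed, not the map, is what you save and share.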

Secondly, while it uses Unity as an engine, it’s written in Clojure via the Arcadia plugin. I’ve got a long-standing interest in Clojure and other Lisp dialects, partially because I find Lisp uniquely suited to programming generative things. Since Lisp doesn’t have a divide between code and data—code is data—it makes it comparatively easy to write code that generates code. Clojure, with its extensive support for immutable functional programming, further lends itself to my preferred ways of making generators, which is one of the reasons I’ve written two novel generators in Clojure.

Of course, there’s not as much of a support ecosystem for Clojure as there is for, say, C#, which is why I’m happy to see projects like the Arcadia plugin.
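To illustrate the code-is-data idea outside of Lisp: here’s a hypothetical Python sketch in which generated “code” is just a nested tuple that one function builds at random and another walks to evaluate. In Clojure you get this for free, since the generated list literally is a Lisp form.

```python
import operator
import random

# The "instruction set" our generated code can use.
OPS = {"+": operator.add, "*": operator.mul, "max": max, "min": min}

def random_expr(rng, depth=2):
    # Generate code as data: a nested tuple like ("+", ("*", 2, "x"), 3).
    if depth == 0 or rng.random() < 0.3:
        return rng.choice([rng.randrange(1, 10), "x"])
    op = rng.choice(list(OPS))
    return (op, random_expr(rng, depth - 1), random_expr(rng, depth - 1))

def evaluate(expr, x):
    # Walk the same data structure to run it.
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

rng = random.Random(42)
expr = random_expr(rng)
print(expr, "->", evaluate(expr, x=5))
```

Because the generated expression is an ordinary data structure, you can mutate it, recombine it, or pretty-print it just as easily as run it—which is exactly what makes the Lisp style attractive for generators.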

Made by Joseph Parker and Ryan Jones, with gamepad support by Oscar Morante, Proc Skater 2016 is not perfect. The physics can sometimes be a bit glitchy:

image

Also, I’m bad at skateboarding. Though I did manage to pull off a few (procedurally named!) tricks:

image

Proc Skater 2016 is available as pay-what-you-want from https://arcadia-clojure.itch.io/proc-skater-2016, with all proceeds donated to the Southern Poverty Law Center, which monitors and fights hate crimes.







Acre 6

Now that ProcJam has ended, there are dozens and dozens of things that were made for it that I could talk about. I’m not going to get around to covering all of them: I could theoretically spend the next year alternating between ProcJam and NaNoGenMo projects. But I am going to highlight ones that catch my eye.

And today that’s Blendo Games’ Acre 6.

The first thing you’ll note is a mention in the readme of one of Brendon Chung‘s favorite games. That turns out to be the key to understanding Acre 6.

When viewed in this light, it’s immediately apparent why the game has the level of interactivity that it does. Like its forebear, it’s an effective deconstruction of the RPG status quo. As you complete the quests and progress through the acts, you’ll swiftly realize the jest at the heart of the game.

I like how the vivid juxtapositions in the encounter and item descriptions are effective at painting miniatures: “Wyvern Hobbies”, “Flummoxed Annoyed Snow Yeti”, “Found Ling’s Amber Snow”, or “Baffled Uptight Sink Unicorn.” They’re cameos contrasted with the RPG oil paintings. They are, alas, of no more depth than that mere description: but that’s a problem much deeper systems also struggle with, and it certainly doesn’t detract from the framing here.

You can find the game here: https://blendogames.itch.io/acre6








Ocean Terrain Generator

ProcJam 2016 is over (though late submissions are always, always accepted). There are 164 entries listed at present, and one of them is mine.

The Ocean Terrain Generator is an outgrowth of one of my previous terrain-editing prototypes. The terrain and water are implemented as a stack of three layers of fluid simulations; each one can erode the layer beneath it and deposit it as sediment elsewhere. (In practice, I turned off the sand erosion for performance reasons; it wasn’t helping much.)

The sand uses the same fluid sim as the water, but with a high damping factor. It’s not as realistic as it could be, but it works. The fluid sim itself uses a pipe-based flux grid to perform a shallow-water fluid simulation, loosely based on this paper, this presentation, and a bunch of other sources. It can, no doubt, be improved.
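In rough outline, one step of a pipe-model simulation looks like this. This is a much-simplified illustrative sketch (no pipe cross-section, gravity constant, or per-pipe damping, and plain Python lists instead of GPU textures), not my actual implementation:

```python
# Minimal pipe-model shallow-water step on an n x n grid.
# height: terrain height per cell; water: water depth per cell;
# flux: per-cell outflow along the four pipes [+x, -x, +y, -y].

def step(height, water, flux, dt=0.1):
    n = len(water)
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for y in range(n):
        for x in range(n):
            # 1. Update each pipe's outflow from the surface-height difference.
            for d, (dx, dy) in enumerate(dirs):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n:
                    dh = (height[y][x] + water[y][x]) - (height[ny][nx] + water[ny][nx])
                    flux[y][x][d] = max(0.0, flux[y][x][d] + dt * dh)
                else:
                    flux[y][x][d] = 0.0  # closed boundary
            # 2. Scale fluxes so a cell can't lose more water than it has.
            total = sum(flux[y][x])
            if total > 0:
                k = min(1.0, water[y][x] / (total * dt))
                flux[y][x] = [f * k for f in flux[y][x]]
    # 3. Move water: subtract outflow, add each neighbor's inflow.
    for y in range(n):
        for x in range(n):
            inflow = 0.0
            for d, (dx, dy) in enumerate(dirs):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n:
                    # The neighbor's pipe back toward us is the opposite direction.
                    inflow += flux[ny][nx][d ^ 1]
            water[y][x] += dt * (inflow - sum(flux[y][x]))
    return water
```

The scaling in step 2 is what keeps the scheme stable: without it, a shallow cell can be asked to send out more water than it contains and the simulation explodes (which is roughly what my GPU version did).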

The production schedule was roughly: get the fluid simulation working; spend two days getting it to work on the GPU and then ripping it out again because of simulation stability issues; implement diamond square terrain generation; add erosion; make shaders and textures; add effects to visualize rain and evaporation; and finally: add flocking boid seagulls.

The initial terrain generation is pretty simple when compared to the fluid sim: just a combination of Perlin noise and the diamond square algorithm. Though that’s no reason to look down on it: finding the best settings reminded me of how many aesthetic choices go into making a good-looking generator that makes sense.
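For reference, the diamond-square half of that combination is compact enough to sketch in full. This is a generic Python version, not the generator’s own code:

```python
import random

def diamond_square(n, roughness=1.0, seed=None):
    # Heightmap of size (2**n + 1) x (2**n + 1) via diamond-square.
    rng = random.Random(seed)
    size = 2 ** n + 1
    h = [[0.0] * size for _ in range(size)]
    for y in (0, size - 1):            # seed the four corners
        for x in (0, size - 1):
            h[y][x] = rng.uniform(-1.0, 1.0)
    step, scale = size - 1, roughness
    while step > 1:
        half = step // 2
        # Diamond step: each square's center = average of its corners + noise.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half][x - half] + h[y - half][x + half] +
                       h[y + half][x - half] + h[y + half][x + half]) / 4.0
                h[y][x] = avg + rng.uniform(-scale, scale)
        # Square step: each edge midpoint = average of in-bounds neighbors + noise.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                nbrs = [h[ny][nx]
                        for ny, nx in ((y - half, x), (y + half, x),
                                       (y, x - half), (y, x + half))
                        if 0 <= ny < size and 0 <= nx < size]
                h[y][x] = sum(nbrs) / len(nbrs) + rng.uniform(-scale, scale)
        step, scale = half, scale * 0.5
    return h
```

The aesthetic choices hide in the constants: the roughness, the rate `scale` decays each pass, and how you blend the result with the Perlin layer all change the character of the terrain far more than the algorithm itself does.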

One of my goals for this year was to have a polished presentation: I wanted there to be direct feedback for the underlying aspects of the simulation. Hence the rain clouds and water vapor. Each world you generate will have different settings for the rate of rainfall and evaporation. Having a visual representation of those settings makes the simulation much more accessible.

On that note, I think the most successful part of this generator is how the moving parts combine together. Because each major variable in the terrain generator can be treated as either an array of floating-point numbers or as a 2D texture, there are a lot of ways that the different systems can be plugged into each other. The connections I made only scratched the surface.

I can do a more technical write-up of the inner guts of the simulation if people are interested, but I’ve also got a ton of other people’s ProcJam projects to highlight and my NaNoGenMo novel generator to finish. November is a busy month!




Kindohm vs RITUALS - Enger Tower

(Warning: video has some intense flashing and flickering)

Everything in this video was generated with an algorithm. Dan Hett created the core footage in the Cyril livecoding environment via an adapted Tracery system. The machine then specified the edits for the final video.

I like seeing generative tools like Tracery used in ways that I never anticipated. This definitely counts.






Procedural Generation, and the problem of Player Perception

Dan Marshall posted an article today about players’ perception of the procedural generation in his game The Swindle, and I thought it was worth talking about because I think that the idea is widely applicable to procedural generation.

The basic idea is that people were complaining about an element of the procedural generation that they thought was broken, even though it was working exactly as intended.

This is a common problem with procedural generators: as designers, we have a pretty good idea of what the outcome is going to be. The player, though, has no idea what is going on under the hood. Is this pattern meaningful, or is it just noise? Both the Eliza effect (where the player assumes the computer has real thought behind it, and is disappointed to discover that it’s an illusion) and the Tale-Spin effect (where a complex process runs that the player never really sees in action) lead to disappointment.

Now, sometimes you want things to be mysterious. To pick another game, Starseed Pilgrim wouldn’t be the same if it explained everything up front. But even here I think there are some tools we can use to make the generator more transparent without giving anything away:

  • If you have secrets on the map for the player to find, give hints that the secrets exist. Nethack uses the sounds you hear in the dungeon as clues: if you know what to listen for, you can guess whether a level has a secret treasure vault.
  • Predictability. But, you say, isn’t procedural generation supposed to be surprising? Ah, but we want surprise within limits. Remember, chaos is boring. Using a generator that is guaranteed to have a certain pattern can help the player learn enough about your generator to be genuinely surprised when the pattern changes in interesting ways.
  • Inform the player afterwards. Doom, while not generated, listed the percentage of secret rooms the player found. That alone told the player there were secret rooms to find. If you normalize the idea that the missing information is part of the game, you can tell the player what to look for without giving away any overt clues.
  • Taking that up another level, explain the reasoning behind the choices the generator made. Why did the bot use that color? Metaphor Magnet explains what associations are present in its paintings. You can do similar things for your generator. Why did it choose to put this thing close to that thing? If you can work out how to translate that into a way the audience can understand, it opens up a powerful new way for your generator to express itself.

I’m sure I haven’t exhausted the list. Are there any other ways you can think of for making generators more transparent?







Babelium

Babelium is a roguelike set in Borges’ Library of Babel.

It’s still under development, so this is just a beta version, but you can walk around, read the books, and explore the infinite variety of hexagons.

And the entire game screen is a valid page of a book from the Library of Babel. You could, in fact, play this game by flipping to the right page in the right book off the right shelf from the infinite library.

Play it here: http://100r.co/projects/babelium/index.html

The project site is here: http://wiki.xxiivv.com/Babelium






Dungeon Crawl Stone Soup v0.19 Tournament

It’s a really busy week for procedural generation around here. ProcJam is in full swing, NaNoGenMo is underway, and Dungeon Crawl Stone Soup just started their latest tournament. You can join in on any of the public Dungeon Crawl webservers.

I’ve praised the level generation in Dungeon Crawl before: the sheer amount of meaningful variation in the map generation is worth noting.

But that variation wouldn’t be meaningful without interactions that give it meaning; and so it’s the rest of the systems that I want to talk about right now. Unlike some other roguelikes that attempt to continually surprise you with their depth of interaction, Dungeon Crawl Stone Soup keeps things focused. That focus lets the developers put their efforts into the things that have the most impact, like variation between characters.

Playing Dungeon Crawl as a Minotaur Fighter surrounded by a continually-active elemental storm is very different from playing as a Summoner surrounded by Ice Beasts, and playing as an Octopode assassin is different again. Each interacts with the procedurally-generated dungeon environment in different ways.

The Minotaur makes a lot of noise and draws enemies towards him, so he tends to seek out narrow corridors where he can limit how many enemies he’s dealing with at once…though wielding an axe that can hit multiple foes at once means the ideal combat position is at a door, with a corridor behind to retreat into. The Hill-Orc Summoner, on the other hand, is surrounded by a crowd of allies, so she prefers an open room where the superior numbers on her side can overwhelm individual enemies. And the Octopode is good at sneaking, so creeping up quietly on enemies before they notice you creates tactical puzzles as you try to get closer.

This is a good principle for other kinds of interactive generative designs: the rest of the game should reinforce the generative parts and react to them.

In addition to the character differences, Crawl uses the dungeon branches to introduce new challenges. These side branches (which have entrances at predictable but not fixed locations) use different parameters for their generators, creating more extreme variations. Mixing in wildly different generator patterns is a highly effective way to add variation and push away from Kate Compton’s bowl of oatmeal problem: at the very least, you can have a bowl of oatmeal and a bowl of corn flakes.

The third feature I’d highlight for those of you looking for ideas for your own generators is that the loot generation in Dungeon Crawl is very streaky. One game might have dozens of scrolls on the first few levels and few potions, another might see a lot of weapons and few scrolls. Over the course of the whole game, these tend to even out, but by only giving the player access to a subset of the tools on each run, it makes each trip that much more unique.
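One simple way to get that kind of streakiness (a sketch, not Crawl’s actual loot code): re-weight the item table once per run with a seeded RNG, so each run over- and under-represents different categories while the long-run averages stay even.

```python
import random

ITEM_TYPES = ["scroll", "potion", "weapon", "armour", "wand"]

def run_weights(run_seed):
    # Once per run, give each item type a random weight. Extreme
    # weights make that run's drops streaky, but by symmetry every
    # type still has the same average frequency across many runs.
    rng = random.Random(run_seed)
    return {t: rng.uniform(0.1, 3.0) for t in ITEM_TYPES}

def drop_item(rng, weights):
    types = list(weights)
    return rng.choices(types, weights=[weights[t] for t in types])[0]

weights = run_weights(run_seed=7)
rng = random.Random(7)
drops = [drop_item(rng, weights) for _ in range(20)]
print(drops)
```

Widening the `uniform(0.1, 3.0)` range makes runs streakier; you could even zero out a category entirely for a run, which is the extreme version of giving the player only a subset of the tools.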

Your generator doesn’t need to use its full range every time. In fact, it might be better if different runs have settings at different extremes.








Automatic Generation of Fantasy Role-playing Modules

Generated roleplaying scenarios have a long history, but this research by Daniel Ashlock and Cameron McGuinness aims to make complete coherent modules in the style of traditional D&D dungeons.

The evolutionary algorithm approach to dungeon generation is interesting; I don’t think I’ve seen that particular approach applied to level design before. It’s apparently an effective way to combine constraints with the generated result.

I’d like to know much more about the populating and description part of the algorithm. The results seem quite convincing, but it’s a bit light on the details. 

http://storage.kghost.de/cig_proc/full/paper_33.pdf