GenerateMe

I like it when people write detailed tutorials about their methods or work process. Breakdowns of specific techniques are very useful if you want to repurpose them for your own ends, and when I read about the minutiae of someone else's process, I often take away new ideas that the author didn't even realize were unusual.

Which is why I was happy when I recently came across the GenerateMe blog and Twitter feed.

Specifically this step-by-step description of what the author calls an "iris": a weird little organic form generated via a particle system. It's a great example of how the output of one odd process can be used as the input for a visualization. Plus it comes with the Processing code to implement it.

It also includes the formula…

…which is just delightful.




Games by Angelina

Of course, I’m not the first person to talk about procedurally generated gameplay. For example, Mike Cook’s ANGELINA project is an AI that designs games.

Here’s a video of one of ANGELINA’s newsgames, which shows off some of the early design efforts. And here’s one of its more recent games, made for the Ludum Dare jam. (You can play the game yourself on itch.io.)

ANGELINA was, among other things, intended to explore higher-level creativity, looking at the problems that you have to deal with when combining systems. It designs a ruleset for the game and then builds levels based on that ruleset. The game-rule generation, inspired by earlier research by Julian Togelius and Jürgen Schmidhuber, evolves a rule grammar into new forms of gameplay.
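
ANGELINA’s actual system is far more sophisticated (see the research paper linked below), but as a toy sketch of what “evolving a rule grammar” can mean, here’s a minimal version in Python. The rule vocabulary and the stand-in fitness function are entirely made up for illustration; a real system would score rulesets by simulated playtests rather than a one-line heuristic.

```python
import random

# Toy rule vocabulary -- invented for illustration, not ANGELINA's actual grammar.
TRIGGERS = ["on_collide", "on_timer", "on_pickup"]
ACTORS   = ["player", "enemy", "block"]
EFFECTS  = ["teleport", "reverse_gravity", "spawn_enemy", "score_point", "kill"]

def random_rule():
    return (random.choice(TRIGGERS), random.choice(ACTORS), random.choice(EFFECTS))

def random_ruleset(n_rules=3):
    return [random_rule() for _ in range(n_rules)]

def mutate(ruleset):
    """Replace one component of one rule with a new random choice."""
    rules = [list(r) for r in ruleset]
    rule = random.choice(rules)
    slot = random.randrange(3)
    rule[slot] = random.choice((TRIGGERS, ACTORS, EFFECTS)[slot])
    return [tuple(r) for r in rules]

def fitness(ruleset):
    """Placeholder: a real system would score rulesets by simulated playtests.
    Here we just reward variety of effects so the loop has something to optimize."""
    return len({effect for _, _, effect in ruleset})

def evolve(pop_size=20, generations=50):
    population = [random_ruleset() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

print(evolve())
```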


It didn’t invent completely new genres: ANGELINA1 mostly did arcade games, while ANGELINA2 made metroidvanias. (You can read more details in this research paper on ANGELINA.) But it does demonstrate that the basic goal of generating new varieties of gameplay is possible.

And really, of all the kinds of procedural generation used in games, that’s the one that comes closest to actually extending a game’s lifespan. Infinite varieties of trees can keep things visually interesting, and indefinitely generated levels mean there’s more static content to consume, but the thing that keeps people playing is that there are new experiences.

Indeed, I suspect that one overlooked reason for the enduring popularity of roguelikes is that the most long-lasting ones have been under continuous development. Even roguelikes that go through a long hiatus between releases, such as NetHack, remain popular precisely because they have more depth than can easily be discovered; the Dev Team Thinks of Everything, after all.

http://www.gamesbyangelina.org/games/






RAISR - Image Upsampling via Machine Learning

Google has started rolling out RAISR, a machine-learning-based upsampling technique. Instead of downloading a full-sized image, Google+ on Android devices will be able to download a small image and then scale it up on the device.

(I think Alex Champandard predicted this would be coming soon, but this is really soon.)

This obviously saves a lot of bandwidth: with a 2x upsample (doubling both width and height on the device), you get roughly the same image from a quarter of the pixels.

This is pretty amazing, for a lot of reasons, and I expect that tech like this will be incorporated into a lot of future production techniques. Why download the entire thing when the upsampled version is 99% identical? Rendering larger images will mean rendering a half-sized version and scaling up. Once you have a real-time solution, expect 4K videogames to get a big boost in rendering speed.
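
To be clear about the baseline being improved on: this is not RAISR (which uses learned, edge-aware filters), just the plain ship-small-and-resize workflow it slots into. A sketch with Pillow and a hypothetical photo.jpg:

```python
from PIL import Image

# Hypothetical source image; "photo.jpg" is a stand-in filename.
full = Image.open("photo.jpg")

# Ship a half-width, half-height version: a quarter of the pixels to transfer.
small = full.resize((full.width // 2, full.height // 2), Image.LANCZOS)
print(full.width * full.height, "pixels vs", small.width * small.height)

# On the device, a plain bicubic upscale restores the original dimensions.
# A learned upsampler replaces this step with filters trained to reconstruct detail.
restored = small.resize(full.size, Image.BICUBIC)
restored.save("restored.jpg")
```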

Of course, I’ve been immersed in this stuff for a while now. (In another window right now, I have a bunch of images I’m using to train Neural Enhance on paintings.) So I feel obligated to point out that there will also be downsides: upsampled images, by definition, will have less information.

Since the whole idea is to throw away information that doesn’t matter to human perception, this won’t be an issue for the average photo. But if you are, say, looking for a source image for visual effects or scientific research, that will be a big problem. You probably shouldn’t store your original photos in this format, though you may not have a choice if your camera makes the decision for you.

And, of course, what counts as irrelevant data depends on the purpose of the photo: recall that Xerox scanners had a mode that altered numbers in scanned documents. The documents looked almost the same, since a 1 and a 2 don’t look all that different from a distance, but the meaning was vastly different. Image upsampling has similar pitfalls: since it’s based on estimating what data looks right, the result might erase differences between things that looked similar but meant something different.

Of course, this is just one more way for photographs to mislead us, along with framing, staging, and the whole panoply of ways that photographs fail to be objective. But this is a manipulation that is invisible to us, unthinking alterations tirelessly enacted by our machine servants. What you see may not be what was there.

Still, it’s an impressive achievement, one that just a few years ago would have seemed impossible.




Using Prefabs in Cogmind’s Map Generation

Josh Ge, the developer of the roguelike Cogmind (and X@Com), posted a writeup about the use of prefabs in Cogmind’s map generator, and I thought it would be a great springboard for talking about the use of prefabs in general.

I’ve mentioned their use in Dungeon Crawl before, but ADOM, Angband, Spelunky, and other roguelikes also make heavy use of vaults and other prefab elements. It’s one way to make stuff stand out from the oatmeal and introduce statistical spikes to break things up.

Josh gives us an in-depth look at how you can implement a generator that incorporates prefabs, from big level-defining anchors placed at the start of generation to room-defining presets applied after the basic layout is in place to create more interesting encounters. Note that the prefab templates allow for a lot of variation (like Spelunky’s rooms), including rotation and mirroring.
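
Cogmind’s actual implementation is detailed in Josh’s post; as a generic sketch of the stamp-a-prefab-with-variations idea (the ASCII template format here is made up, not Cogmind’s), it could look something like this:

```python
import random

# A made-up ASCII prefab template: '#' wall, '.' floor, '+' door.
PREFAB = [
    "#####",
    "#...#",
    "#...+",
    "#####",
]

def mirror(prefab):
    return [row[::-1] for row in prefab]

def rotate(prefab):
    """Rotate the template 90 degrees clockwise."""
    return ["".join(col) for col in zip(*prefab[::-1])]

def variants(prefab):
    """All four rotations of the template and of its mirror image."""
    out = []
    for base in (prefab, mirror(prefab)):
        for _ in range(4):
            out.append(base)
            base = rotate(base)
    return out

def stamp(grid, prefab, x, y):
    """Copy the prefab into the map grid at (x, y); the caller checks that it fits."""
    for dy, row in enumerate(prefab):
        for dx, cell in enumerate(row):
            grid[y + dy][x + dx] = cell

# Stamp a random variant into a blank 20x10 map.
grid = [["." for _ in range(20)] for _ in range(10)]
stamp(grid, random.choice(variants(PREFAB)), x=3, y=2)
print("\n".join("".join(row) for row in grid))
```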

Cogmind uses the prefabs to implement encounters and tactical challenges, which is a smart application of the technique. Using prefabs in this way can be thought of as a generalization of action set-pieces in more conventional level design. Rather than laying out the specifics of one unique encounter, you can describe the general parameters of the challenge and let the player encounter many variations. (Spelunky uses this to great effect.)


Prefabs are also a way to add landmarks. It’s difficult to make a map generator whose output stands out enough to avoid being lost in the mush, but that otherwise-bland content makes excellent connective tissue between the more distinctive landmarks that grab the player’s attention.

Another way to get unique landmarks is to have multiple generators, with a few being rarely invoked, as with a biome generator. But prefabs are cheaper and can be combined with other techniques, particularly if your prefabs can have other generators nested inside them.

The encounter system also touches on something else I’ve been thinking about lately: generating abstract systems. Games and simulations usually have a lot of invisible systems that aren’t directly reflected in the visible map space: tech trees, combat rock-paper-scissors relationships, economic systems, and so on. But, with a few exceptions, these are not things that procedural generation is commonly used to make.

But there’s no reason not to use procedural generation to make them. Indeed, many of the biggest procedural generation games have precisely this weakness: their generation is visible and physical, but the gameplay systems aren’t subject to the same variation. Procedurally generated gameplay is an under-explored area.
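
As a tiny example of what generating an abstract system could mean, here’s a sketch that rolls a random rock-paper-scissors-style counter cycle for a set of unit types. The unit names and the cycle construction are my own invention, just to make the idea concrete:

```python
import random

def generate_counter_graph(units):
    """Arrange unit types in a random cycle so each type counters the next one,
    guaranteeing every unit both beats something and is beaten by something."""
    order = units[:]
    random.shuffle(order)
    return {unit: order[(i + 1) % len(order)] for i, unit in enumerate(order)}

units = ["spearman", "cavalry", "archer", "mage", "golem"]  # invented names
for unit, beats in generate_counter_graph(units).items():
    print(f"{unit} counters {beats}")
```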

http://www.gridsagegames.com/blog/2017/01/map-prefabs-in-depth/








Evolve Me (ProcJam 2016) 

Evolutionary algorithms are a staple of artificial life, and here’s one by Petter Bergmar, made in Unity for ProcJam 2016. The interesting thing about this one is the barriers that the creatures are initially incapable of jumping over. Many a-life sims are strictly 2D, so the specific niche that these creatures evolve to fill is fairly unique.

https://petterbergmar.itch.io/evolve-me








Civilization II

I found my old Civilization discs over the holidays, so what better time to discuss the quirks of the Civilization II map generator?

Civ2 was the gold-standard Civilization-style game in the late 90s. With the collapse of MicroProse it looked like the end of the series, and most other civ-style games at the time were clearly direct reactions to it.

And, like the first Civilization, it had a map generator.

Not a terribly realistic generator, as you can see from the pictures above, but one that looked close enough to work for gameplay.

The placement of special resources and bonus huts followed a predictable pattern based on the random seed, with resources being a knight’s move away from a central square. An example:

There were 64 different arrangements of this resource pattern, with slightly different offsets depending on the seed. This was partially obscured by the different terrain types, so it took a little while for players to catch on. (The huts followed a similar but distinct pattern.) There was enough interest in the game that the details were eventually reverse-engineered and custom map generators were created.
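
I won’t reproduce the reverse-engineered formula here, but the flavor of the pattern is easy to sketch: mark the tiles that land on a knight’s-move lattice, shifted by a per-map offset derived from the seed. This is an illustrative approximation only, not the actual Civ2 algorithm (for one thing, this toy version has just five possible arrangements instead of 64):

```python
def is_special(x, y, seed):
    """Illustrative approximation only -- not the reverse-engineered Civ2 formula.
    Tiles passing this test form a lattice spanned by the knight's-move vectors
    (1, 2) and (2, -1), shifted by a per-map offset derived from the seed."""
    offset = seed % 5
    return (x + 2 * y + offset) % 5 == 0

# Print a small patch of the pattern for one seed; 'R' marks a resource tile.
for y in range(12):
    print("".join("R" if is_special(x, y, seed=3) else "." for x in range(24)))
```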

This pattern fits right in with the radius of tiles that a city could use, so I presume it was deliberate. However, none of the later Civilization games had similar quirks in their generators, probably because having such a dominant city location ultimately works against the hard-choices gameplay. Alpha Centauri corrected this in spades, but that’s a topic for another time.

Civ2 also has maps that are way too big for the amount of interaction you have. Late game, you’re inevitably going to end up with a lot of empty map or way too many tiny cities to micromanage. (Of course, I’ve played what I’d guess to be thousands of hours of Civ2, so this drawback only goes so far.)

I find that when I think of Civ2 maps (versus Civ4 maps) I remember the individual tiles much less and the shape of the continents much more.

The map editor helped a bit with that, I suppose; one of the games that sticks in my mind is the early game I played on a very narrow but very tall cylinder, where I had the south pole to myself thanks to a ring continent I drew in the map editor. But there was also the high-difficulty game on a completely random map where I got stuck at the north pole and had a difficult time moving down to the warmer continents. I couldn’t begin to remember what the individual tiles looked like, but I could probably sketch a vague map of the landforms.




Music video generated by a neural net

Mario Klingemann fed a song into an image-generation neural network, and produced this music video.

I’ll just quote Mario’s explanation for how it works:

This consists of several components: the images are generated by a neural network that is given a 4096 dimensional feature vector. What happens is that you give the NN 4096 numbers and it produces a 256x256 image out of those. In theory this network could produce any possible image, but in reality most vectors that you feed in produce garbage or one particular image (I call it the “fox with eye shadow” - sometimes you see it pop up because I didn’t really clean up my data). So the first task is to find feature vectors in this huge space of possibilities that produce images that look interesting or even better, like something that humans would recognize as a certain object. In order to do that a second neural network is used to classify whatever the generator has produced. Using gradient descent the algorithm tries to tweak the feature vector so that the resulting image looks more and more like a certain category. For this clip I was more interested in abstract looking images so what I did is to stop the gradient descent before it got too concrete and save the feature vectors.

After I had about 1000 different vectors I moved on to step 2, which is making the music video. The idea here is that I want similar sounding samples to produce similar images. So what I did is to sample a song, transform it into frequency bands using FFT and then cluster the short snippets into 100 clusters using k-means. When I now play back a song it will use the learned k-means to give me a number between 0-100 for a certain frequency pattern. Surprisingly this works even for songs that are totally different from the one I trained the k-means on. That new number I get every frame becomes the index of my pre-calculated feature vectors which you can also see as a coordinate in 4096-dimensional space. That coordinate becomes the current target for my “playback particle” which tries to get from its current position in 4096-dim space to the new target. It uses a kind of gravity/spring physics to get there - or you could also see it like a mouse-follower script, so there is a bit of inertia in order to get those morphing transitions. Because that is the fascinating part of the latent space: you can interpolate between two feature vectors and will get a weird-smooth transition between the two images.

Note that last point: the multi-dimensional latent space lets you smoothly interpolate between images, which is what gives the video its morphing transitions. The whole thing is also designed around giving order to chaotic images, which shouldn’t be a surprise with this artist.
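
Piecing that description together, here’s a rough sketch of the audio half of the pipeline: FFT frames clustered with k-means, cluster indices mapped to precomputed latent vectors, and a spring-like follower smoothing the path through latent space. Every parameter, array, and library choice below is my guess for illustration, not Mario’s actual code:

```python
import numpy as np
from sklearn.cluster import KMeans

# Assumptions for illustration: mono audio at 44.1 kHz, 1024-sample frames,
# 100 clusters and 1000 precomputed feature vectors, as in the description above.
SAMPLE_RATE, FRAME = 44100, 1024
N_CLUSTERS, LATENT_DIM = 100, 4096

def frame_spectra(audio):
    """Split the song into short frames and take FFT magnitudes (the 'frequency bands')."""
    n_frames = len(audio) // FRAME
    frames = audio[: n_frames * FRAME].reshape(n_frames, FRAME)
    return np.abs(np.fft.rfft(frames, axis=1))

# Step 1: cluster the training song's spectra into 100 sound "types".
train_audio = np.random.randn(SAMPLE_RATE * 30)            # stand-in for a real song
kmeans = KMeans(n_clusters=N_CLUSTERS, n_init=4).fit(frame_spectra(train_audio))

# Each cluster index maps to one of the precomputed latent vectors
# (random stand-ins here; in the real project these were mined by gradient descent).
latent_vectors = np.random.randn(1000, LATENT_DIM)
cluster_to_latent = latent_vectors[np.random.choice(len(latent_vectors), N_CLUSTERS)]

# Step 2: at playback time, a "particle" with inertia chases the target vector,
# so the generated frames morph smoothly instead of jumping.
position = np.zeros(LATENT_DIM)
velocity = np.zeros(LATENT_DIM)
for spectrum in frame_spectra(np.random.randn(SAMPLE_RATE * 10)):  # stand-in playback song
    target = cluster_to_latent[kmeans.predict(spectrum[None, :])[0]]
    velocity = 0.9 * velocity + 0.05 * (target - position)   # spring plus damping
    position = position + velocity
    # position would now be fed to the image-generator network for this video frame
```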

Be warned that the videos have some rapid flashing (I slowed the GIF above down a bit).

An experimental music video clip generated by a neural network
Another music video generated by a neural network
A generated music video based on extracted poses from still images






Erosion Sim (in browser on GPU)

As you might imagine given some of my recent projects, this erosion sim by Ricky Reusser caught my eye. And not just because my own GPU code has an instability I haven’t tracked down yet.

Erosion is something that doesn’t get included in many real-time terrain generators, partly because each step depends on the neighboring values, making the whole thing much more expensive than an approach that can evaluate every point independently. It’s starting to become a bit more common, now that you can use a GPU, but it’s still fairly rare.
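
Ricky’s demo runs a much fuller simulation on the GPU; as a minimal illustration of that neighbor-dependence, here’s a crude thermal-erosion pass in Python (not his algorithm, and with wrap-around edges for brevity):

```python
import numpy as np

def thermal_erosion(height, iterations=100, talus=0.01, rate=0.25):
    """Move material downhill to neighbors whenever the local slope exceeds a
    talus threshold. Every step reads neighboring cells, which is the coupling
    that makes erosion pricier than evaluating noise independently per point."""
    h = height.copy()
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(iterations):
        delta = np.zeros_like(h)
        for dy, dx in shifts:
            neighbor = np.roll(h, (dy, dx), axis=(0, 1))
            # Outflow toward this neighbor wherever we sit too steeply above it.
            flow = np.maximum(h - neighbor - talus, 0.0) * rate / len(shifts)
            delta -= flow                                    # material leaves this cell...
            delta += np.roll(flow, (-dy, -dx), axis=(0, 1))  # ...and lands on the neighbor
        h += delta
    return h

# Erode some random noise (a stand-in for a real heightmap).
rng = np.random.default_rng(0)
eroded = thermal_erosion(rng.random((128, 128)))
```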

As an effect, though, erosion can really help sell a terrain. First off, it simulates a history for the terrain. It’s a lot harder to do that with just Perlin noise. Having what is, in a sense, a real history adds information and gives a sense of weight and meaning to the shapes. You can infer things about the shape of the whole by just looking at the small, local part where you’re standing.

Second, it breaks up the terrain in interesting ways. By this point I’m pretty used to seeing Perlin noise in all of its iterations. Using a process that operates under drastically different assumptions throws that off: shapes that might have been humdrum are modified in interesting ways.

There are a few ways to fake erosion that are a lot faster than simulating it outright. That still gets you all of the second benefit, and may manage to imply something about the history as well. (And it might also be easier to combine with additional layers of input.)

http://rickyreusser.com/demos/regl-sketches/005/






endless screaming

Aaaaaaahhhhhhhhh ah a ahhhhh ahh ah aaaahhhh, ahh aaah ahhh aaaaaahhhhhhhhh.







Doguelike

From ProcJam 2016 comes this twist on roguelike dungeon crawling and Progress Quest.

Doguelike, by Jérôme and LucieRoyGraph, is an idle game–that is, it mostly plays itself. Unlike Progress Quest, but like a couple of other roguelikes, you do have a bit of input into what happens, via choosing which stats to level up.

Also unlike Progress Quest, the doge is exploring a fully generated dungeon, simulated in the usual detail, which makes it much more interesting to watch from moment to moment.

I’m pretty terrible at keeping this shiba inu alive for very long, but maybe you’ll have better luck.

https://callmemonamiral.itch.io/doguelike