Race the Sun

Race the Sun wouldn’t be Race the Sun without procedural generation.

Race the Sun is more or less an infinite runner, only played on a 2D plane where you maneuver between obstacles and avoid shadows. The twist is that everyone is racing in the same world, which is regenerated every day into a new configuration.

It’s not the only procgen game that’s embraced daily challenges and leaderboards on the same seeded generation (Spelunky and AudioSurf are two other examples). But Race the Sun is built around it. 
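
How Race the Sun actually implements this isn't public, but the basic trick behind a shared daily world is small: derive the generator's seed from the calendar date, so everyone who plays that day gets the same layout. Here's a minimal sketch in Python; the layout names and function structure are entirely made up for illustration.

```python
# Minimal sketch of a shared "daily seed": everyone who generates the
# world on the same (UTC) date gets the same pseudo-random sequence.
# This is an illustration, not Race the Sun's actual implementation.
import datetime
import hashlib
import random

def daily_seed(salt="race-the-sun"):
    """Derive a deterministic seed from today's date (UTC)."""
    today = datetime.datetime.utcnow().date().isoformat()  # e.g. "2016-04-03"
    digest = hashlib.sha256((salt + ":" + today).encode("utf-8")).hexdigest()
    return int(digest[:16], 16)

def generate_world(seed, num_regions=5):
    """Stand-in for a level generator: pick an obstacle layout per region."""
    rng = random.Random(seed)
    layouts = ["pyramids", "canyon", "tunnels", "spires", "ruins"]
    return [rng.choice(layouts) for _ in range(num_regions)]

if __name__ == "__main__":
    seed = daily_seed()
    print(seed, generate_world(seed))
```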

Because the game gives you a reason to play the same level multiple times, you begin to learn it and develop strategies and relationships with it. Since the game is all about just-one-more-time runs, those repetitions quickly add up, and you start noticing the incidental patterns that would easily be glossed over if every run used a new seed.

One way to get players to form a relationship with the things you generate is to give them time to develop one. Elite always started the player in the same system, which is why Lave, Leesti, Riedquat, Diso, and Zaonce have thirty years of resonance with players. 

Likewise, having the map in Race the Sun persist for a day gives it a sense of ephemerality (you'll never see this particular configuration again) while also letting you develop a relationship with the emergent space. You're not just recognizing the building block that places a mountain or a tunnel; you're also developing a relationship with that particular mountain, and with those points you've almost worked out the optimal path to collect without crashing. 

Once the player has recognized the building blocks that you’re using to generate your world, it is very difficult to get them to look past that. Race the Sun shows us one way to do it, by persisting the emergent patterns long enough to form an emotional bond with them. 




TensorFlow for Poets

Along the same lines as From Zero to Lasagne, only with a more humanities bent, TensorFlow for Poets is a guide to getting TensorFlow running on OS X (with a screencast so you can follow along!). It's written by Pete Warden, who helped build TensorFlow. Once you've got it set up, you might also want to check out the tutorials on the TensorFlow site.
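
If you just want to confirm the install worked before diving into those tutorials, the classic sanity check from that era's getting-started guides is only a few lines. (This uses the old graph-and-session API; later TensorFlow 2.x versions removed tf.Session.)

```python
# Quick sanity check that the TensorFlow installation works.
import tensorflow as tf

hello = tf.constant("Hello, TensorFlow!")
a = tf.constant(2)
b = tf.constant(3)

with tf.Session() as sess:
    print(sess.run(hello))
    print(sess.run(a + b))  # should print 5
```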

Maybe this will help one of you get set up to experiment with deep learning and explore the artistic possibilities. I do think those possibilities are mostly unexplored, since relatively few people have the combination of skills, time, and artistic bent needed to explore them. Hopefully tutorials like this will help increase that population.

https://petewarden.com/2016/02/28/tensorflow-for-poets/














Neural-doodle and style transfer

I'm still playing with neural-doodle. It's a bit slow, due to my insistence on using larger image sizes for nicer results. I'd probably get further by iterating faster on smaller images on the GPU before rendering at full resolution, but one of the things I'm currently experimenting with is finding which settings get the best final results.

Said results, so far, range from astonishing to a noisy mess, depending on which style image I use and how it's marked up semantically. The algorithm is very good at understanding the edges of the branches and leaves, so marking too close to them can sometimes produce a worse result.

The scale of the style image also matters. I'm using gigapixel scans of public domain artwork for most of my style images, so I have the resolution to crop out a small section of the painting. This works better than trying to use the full-scale painting. I'm not sure yet if that is because the details get too small once I scale down the larger painting, or if it is because the subject of the cropped image has more in common with the target content. The algorithm appears to be very sensitive to the scale of the features in the image; using a larger image produces a different result.
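
The prep work itself is mundane: crop a detail out of the scan whose subject resembles the target content, then scale it so the brushstrokes are roughly the same size as the features in the content image. A rough sketch with Pillow; the file names and crop box are placeholders.

```python
# Sketch of prepping a style image from a high-resolution scan: crop a
# region whose subject resembles the target content, then resize it so
# the feature scale roughly matches the content image.
from PIL import Image

content = Image.open("content_photo.jpg")
scan = Image.open("gigapixel_scan.png")

# Crop a detail from the painting (left, upper, right, lower in pixels).
detail = scan.crop((4000, 2500, 8000, 5500))

# Shrink the crop to roughly the content image's size, preserving the
# aspect ratio, so the brushstroke scale is comparable during transfer.
detail.thumbnail(content.size, Image.LANCZOS)
detail.save("style_detail.png")
```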

I think that of all the artistic applications of neural networks, style transfer algorithms have the most near-term potential for artists. Taking a photo or a sketch, applying a style image that you painted, and getting a usable result that can go into whatever game, film, comic, or other project you're working on will be a handy ability for some future artists. Whatever default settings it ships with will probably be overused and we'll get a rash of van Gogh-styled photo filters, but for working artists it'll probably take on the status of something like ZBrush or Mudbox: a tool you use to sculpt a result that would have taken hundreds of hours to do by hand.

It will, of course, also raise questions about copyright, intellectual property, and just who exactly is responsible for what part of the result, which is one reason why I’ve been using my own art and public domain paintings as the source images for my experiments. Having public domain data with clear licenses for this kind of stuff is going to be critically important for keeping the artistic possibilities of these kinds of tools open to everyone to experiment with. 








Tilemancer

Speaking of artistic centaurs and tools, here's a generative tool you can use right now. Tilemancer is a node-based tile generator for limited-palette 2D tiling textures.

The UI has some counterintuitive bits (click and drag up and down to change numbers) but it’s a nifty little tool. This tutorial may help.


I’m sure you can think of some creative ways to use the tiles you make.


https://led.itch.io/tilemancer








neural-doodle and Image Analogies

As an artistic process, making these neural doodles feels like it falls somewhere between collage and photomontage. It reminds me strongly of John Heartfield's work, though he was making photomontages for explicitly political ends. The way the final image is both a seamless result and, at the same time, obviously constructed (at least to the artist who did the composition) feels similar to me.

For now, the direct doodling is mostly confined to the data from one source image, making complex re-compositions more difficult. I suspect that there are some clever artistic ways to use this limitation, though I’d just as soon see some bigger images or a better way to tile the algorithm.

The results appear quite polished and inventive, until you see them directly next to the original paintings. Then the derivatives become obvious, and unless the new composition has a strong motivation the new paintings suffer from the comparison. More source images might help. Or, possibly, just a better artistic vision for the result you want and a grasp of the technique to make the doodling do what you want.

In any case, I don’t think that painters need to worry that they’re going to be replaced—although I am tempted to try my hand at repainting that van Gogh from the new composition. But that’s another form of collage.

[image: the Cézanne example discussed below, with the source semantic map in the upper right, the target semantic map in the lower left, and the generated result in the lower right]

The way neural-doodle works, to use this example based on one of Cézanne's paintings, starts with me drawing a semantic map of the painting, which you can see in the upper-right corner. Then I draw the target semantic map, in the lower left-hand corner, to tell the algorithm what composition to aim for.

I then feed both into neural-doodle, which tries to match the style to the new composition using Semantic Style Transfer. As Alex Champandard argues in the paper, generating images from CNNs mostly ignores the semantic information the networks collected during classification, which gives you matches between colors but not between shapes or objects. Semantic Style Transfer rectifies that. For now the semantic annotations are added manually by the user, but they could conceivably be handled by an algorithm in the future.

The resulting image in the lower right-hand corner takes a bit of processing time and some careful judgement on my part as to how the semantic map should be drawn. I’m still learning the ways that the mapping can be used, and what kind of images it works best on. There’s quite a bit of potential here for artists to play with and experiment on.
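
If you want to try drawing your own, the semantic maps are nothing exotic: flat-color images where each color labels a region, with the same colors meaning the same things in the style map and the target map. I paint mine by hand in an image editor, but here's a hypothetical sketch of building one programmatically; the colors and shapes are arbitrary, and the exact file naming and command-line options are documented in the neural-doodle README.

```python
# Hypothetical sketch: a semantic map is a flat-color image where each
# color marks a region (sky, foliage, ground...). In practice these are
# usually painted by hand over the source image.
from PIL import Image, ImageDraw

SKY = (70, 130, 220)      # one arbitrary color per semantic region
FOLIAGE = (40, 160, 60)
GROUND = (180, 140, 90)

width, height = 640, 480
sem = Image.new("RGB", (width, height), SKY)
draw = ImageDraw.Draw(sem)

# Rough blobs standing in for hand-painted regions.
draw.rectangle([0, 300, width, height], fill=GROUND)
draw.ellipse([120, 80, 420, 330], fill=FOLIAGE)

# neural-doodle matches regions between the style map and the target
# map, so the same colors must mean the same things in both images.
sem.save("target_sem.png")
```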

I think the most in-demand skill is probably a good eye for composition, together with a strong idea to focus the result. Without a unifying theme or focus, the generated result can't help but lack the cohesion of the original. But with more sources of data mixed in and a strong idea to unify it, there's potential for an artist of the future to find.

One thing I want to see is more artists feeding their personal work into the algorithm and controlling it from both ends. What happens when you've created both the source image and the output? What kind of source images should you draw to get the results you want to see?

Meanwhile, as we wait for the centaurs to overrun civilization, it seems like there’s room to explore the implications of the artistic movements this enables. 






Mythology and Procedural Generation

No one seemed to have a good write-up of the GDC session where Tarn Adams and Tanya Short talked about procedurally generated mythology. I was going to write it up myself, but it turns out there’s something that’s even better: the video has been posted for anyone to watch.

Featuring live generation of Moon Hunters levels and Dwarf Fortress’s upcoming creation mythologies, the talk gives a unique look under the hood. You will need a bit of background as to how Moon Hunters and Dwarf Fortress work to get the most out of it. But there aren’t a whole lot of talks like this, exploring the practical application of procedural generation in games from people who have designed them, with the results of the generator on display during the talk.

Tanya showed the level generation from Moon Hunters (including the Unity inspector for their generator and some of the prefabs). The macro-structure of the night-sky constellations turned out to be key for tying the generated myths together. Past heroes get memorials in future playthroughs, letting your future characters react to them. (Moon Hunters is still getting content updates, so I'm excited to see how they develop it.)

Tarn, meanwhile, showed off the new myth generator and talked about how it ties into the overall Dwarf Fortress ecosystem. The goal for the new creation mythology generator is to create coherent fantasy settings for Dwarf Fortress. Right now, the worlds created in Dwarf Fortress mostly share the same fantasy setting; new elements can be added but there isn’t any structure to tie them together. 

With the myth generator, instead of a hodgepodge of fantasy elements that don’t have any relationship with each other, the idea is to create a story that ties together the elements that get used.

Something he elaborated on a bit at the end, which didn't come out as much in the main talk, is how the myth generator feeds directly into the mechanics of the rest of the game. For example, the cosmic egg can leave fragments that become continents and landforms in the map generator. Since Dwarf Fortress works by extreme emergence, feeding the structure of the creation myth into the rest of the systems should produce some very interesting results.
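
Dwarf Fortress's actual generator is far more elaborate than anything I could reconstruct from the talk, but the shape of the idea (myth elements becoming parameters for downstream generators) fits in a few lines. Everything in this toy sketch is invented for illustration.

```python
# Toy sketch of a myth element feeding downstream generators.
# This is my own illustration, not Dwarf Fortress's system.
import random

def generate_creation_myth(rng):
    origin = rng.choice(["cosmic egg", "world tree", "primordial sea"])
    fragments = rng.randint(3, 7)
    return {"origin": origin, "fragments": fragments}

def generate_map(myth, rng):
    # The myth's structure becomes a constraint on the map generator:
    # e.g. each fragment of the shattered origin seeds one continent.
    return [
        {"continent": i, "born_from": myth["origin"],
         "size": rng.randint(50, 200)}
        for i in range(myth["fragments"])
    ]

rng = random.Random(2016)
myth = generate_creation_myth(rng)
print(myth)
print(generate_map(myth, rng))
```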




Live-coded Music with Haskell, recorded live

Music generation is one of those things that impresses me but that I have little aptitude for: I'm very visual, but I don't have the training or the ear to do more than dabble with composing. I like listening to it, though, and I can certainly admire it when someone does it well.

Mike Hodnick's live-coding performance, linked below, is one of those things I just watch with fascination. Composing as a performance, via code, is a tricky thing to pull off. In this case, I believe he was using Tidal, a live-coding language embedded in Haskell.

(via https://www.youtube.com/watch?v=JOMslt17KvY)








From Zero to Lasagne

Perhaps you're one of the many people who have seen the possibilities of deep learning and neural nets, like Deep Dream, StyleNet, or neural-doodle. But setting up all the software you need to get them running on your computer can seem daunting.

Enter “From Zero to Lasagne”, a guide to setting up the Lasagne deep learning library from scratch. It covers the hardest parts of setting up a Python environment to run deep learning programs.

I was able to use this guide to get neural-doodle running on my Windows machine, with native access to the CUDA processing on the GPU.
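
If you want to verify that the GPU is actually being picked up once everything is installed, a quick Theano check like this should do it (Lasagne sits on top of Theano). The device flag values vary between Theano versions, so treat the flags here as an example rather than gospel.

```python
# Rough check that Theano (which Lasagne builds on) is using the GPU.
# Run with e.g.  THEANO_FLAGS=device=gpu,floatX=float32 python check_gpu.py
# (newer Theano versions use device=cuda instead of device=gpu).
import numpy as np
import theano
import theano.tensor as T

print("Theano device:", theano.config.device)

x = T.matrix("x")
f = theano.function([x], T.exp(x).sum())
print(f(np.random.rand(1000, 1000).astype(theano.config.floatX)))
```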

https://github.com/Lasagne/Lasagne/wiki/From-Zero-to-Lasagne




The Case Against Artificial Intelligence

In the wake of recent AI news, perhaps now is a good time to take a step back and consider an opposing viewpoint.

Tariq Ali read the "How To Think About Bots" botfesto and thought it was too pro-bot, with too little consideration of the potential downsides of AI. Tariq is no stranger to artificial intelligence or bots; he's participated in NaNoGenMo and created other story generation and AI projects. But he's concerned about those downsides all the same.

In the essay, he takes a look at the potential for AI to cause unemployment, the “AI Winter”, technological dependence, botcrime, and the existential angst caused by AI outperforming humans. 

(Don’t laugh: the existential angst is one of the parts of the essay I found most persuasive, possibly because it involves the intersection between an underexamined pitfall and the very messy human psychology that happens when confronted by a world that doesn’t need you.)

And, unlike the pro-bot botfesto, this essay was itself composed by a bot, after being fed content with contextual categories:

So this blog post has been generated by a robot. I have provided all the content, but an algorithm (“Prolefeed”) is responsible for arranging the content in a manner that will please the reader. Here is the source code. And as you browse through it, think of what else can be automated away with a little human creativity. And think whether said automation would be a good thing.

http://tra38.github.io/blog/ai3.html







delacian:

I’ve ported my procedural cyberpunk city generator to Unreal Engine 4! You can watch the full video of the generation here.

Delacian is one of the generative developers I’ve been keeping an eye on. You might have seen the procedural brutalism GIFs, made with @delacian’s procedural modeling tools.


I'm just guessing, but I suspect that @delacian was partially inspired by Introversion's canceled project. There aren't a lot of details posted yet about Project Sprawl, the in-development game these are a part of, but there's still quite a bit there to admire.

The procedural modeling tools are worth talking about on their own, though. Procedural modeling in general is under-discussed. There are a lot of tools out there that use generative techniques to make artists' lives easier and let them create while still retaining flexibility, but they don't get a lot of publicity. 

Internal game development tools are often fragile and not suitable for a public release, and generative tools can be especially tricky, particularly ones that rely on the artist's curation to avoid bad outcomes or that require an extensive kit of parts before they can generate a result at all. There are some unsung heroes out there who have put in a lot of work on tools we never hear about.

A generative tool that operates in real-time has the draw of something magical. When you can see how the building expands and grows, or how the city is laid out, that gives you a bit of extra context for how the system that built it operates. The process is as interesting as the end result. 

If you’re building a generative system, it might be worth including a preview of how the system is generating stuff during the loading process. Procedural generation is a magic trick, but like a good magic trick it means a lot more if you know that it’s happening. Directing the player’s attention to the strengths of the system can go a long way towards framing it for the observer.
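
To sketch what that can look like (this has nothing to do with @delacian's actual tools): structure the generator as an iterator that yields intermediate states, and let the loading screen draw each snapshot as it arrives.

```python
# Sketch: a generator that yields intermediate states so the loading
# screen can draw the world taking shape step by step. The building
# dicts and draw_preview are hypothetical stand-ins.
import random
import time

def generate_city(seed, blocks=20):
    """Yield the partially built city after each building is placed."""
    rng = random.Random(seed)
    city = []
    for _ in range(blocks):
        building = {
            "x": rng.randint(0, 100),
            "y": rng.randint(0, 100),
            "height": rng.randint(2, 40),
        }
        city.append(building)
        yield list(city)  # snapshot of the world so far

def draw_preview(snapshot):
    """Stand-in for rendering the partial city on the loading screen."""
    print("placed {} buildings, tallest so far: {}".format(
        len(snapshot), max(b["height"] for b in snapshot)))

if __name__ == "__main__":
    for snapshot in generate_city(seed=42):
        draw_preview(snapshot)
        time.sleep(0.05)  # pretend each step takes noticeable time
```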