Martin Evans’s Procedural City Generation For Dummies Series

Martin Evans is working on a stealth game set in a procedurally generated city. But what interests us today is that he’s been writing a lot about the technical details that go into that.

I first spotted this article on half-edge geometry, but there’s also discussion of using tensor fields to generate roads, lot subdivision, building footprints via subtractive geometry, and even a side trip into galaxy generation.
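If you haven’t run into half-edge meshes before, the core idea is small enough to sketch. This is my own minimal illustration in Python, not Martin’s code: each directed edge knows its opposite twin and the next edge around its face, which makes walking faces and borders cheap.

```python
# Minimal half-edge mesh sketch: each directed edge stores its origin
# vertex, its opposite (twin) half-edge, and the next half-edge around
# the same face.
class HalfEdge:
    def __init__(self, origin):
        self.origin = origin   # vertex index this edge starts from
        self.twin = None       # opposite half-edge (on the adjacent face)
        self.next = None       # next half-edge around the same face

def face_vertices(start):
    """Walk next-pointers to list the vertices of one face."""
    verts, e = [], start
    while True:
        verts.append(e.origin)
        e = e.next
        if e is start:
            return verts

# Build one triangle (vertices 0, 1, 2) out of three half-edges.
a, b, c = HalfEdge(0), HalfEdge(1), HalfEdge(2)
a.next, b.next, c.next = b, c, a
print(face_vertices(a))  # [0, 1, 2]
```

The payoff is that queries like “which lots border this road edge?” become pointer-chasing instead of searches, which is why the structure shows up so often in city generators.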

Martin covers both the code he’s using and the reasoning behind his choices. I’m always happy to be able to point people towards explanations like this, because there seems to be a pent-up demand for accessible next-step tutorials.

Much of Martin’s code is open source and available to be used and learned from. This is an excellent resource if you’re looking for technical details of implementing generators.






XBRZPXL.EXE

If you’re familiar with retro gaming, you might have seen some of the pixel scaling algorithms that have been invented over the past few years. They’ve usually been used as post-process effects, to upscale old images. But now Ben Porter has created a small paint program that lets you directly paint with the XBRZ algorithm, making a new kind of pixel art much easier to create.
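xBRZ itself is fairly involved, but if you’re curious how this family of algorithms works, here’s a sketch of its much simpler ancestor, Scale2x (EPX), in Python. This is my own illustration, not Ben’s code:

```python
def scale2x(img):
    """Scale2x / EPX: double an image, smoothing diagonal edges.
    img is a list of rows of hashable 'pixels' (e.g. color tuples)."""
    h, w = len(img), len(img[0])
    out = [[None] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            p = img[y][x]
            a = img[y - 1][x] if y > 0 else p       # above
            b = img[y][x + 1] if x < w - 1 else p   # right
            c = img[y][x - 1] if x > 0 else p       # left
            d = img[y + 1][x] if y < h - 1 else p   # below
            # Copy a neighbor into a quadrant when it detects a
            # diagonal edge; otherwise keep the center pixel.
            e1 = a if c == a and c != d and a != b else p
            e2 = b if a == b and a != c and b != d else p
            e3 = c if d == c and d != b and c != a else p
            e4 = d if b == d and b != a and d != c else p
            out[2 * y][2 * x], out[2 * y][2 * x + 1] = e1, e2
            out[2 * y + 1][2 * x], out[2 * y + 1][2 * x + 1] = e3, e4
    return out
```

Scale2x expands each pixel into a 2x2 block, stealing from a neighbor when the neighborhood looks like a diagonal edge; xBRZ extends the same neighborhood-matching idea with many more rules and color blending.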

It’s called XBRZPXL.EXE, and there’s already a lot of incredible artwork that’s been made with the program.

The controls are rudimentary, but there are a few tricks. Pressing ESC fills the canvas with the current color. Right-clicking on an antialiased pixel will pick up that color. Pressing S saves the current image to ‘image.png’ (and overwrites your previous saved image! Make sure you copy it if you want to keep it!)

XBRZPXL is a good example of how something that was previously an impersonal algorithm can transition into a controllable tool for artists to work with. Artificial intelligence doesn’t need to replace artists; it can give us new procedurally generated paint guns to use. Eventually, even complex algorithms like neural-doodle will be a part of many artists’ workflows.

The tools change, but the fundamentals don’t.

http://bp.io/post/1732

Have an image you’ve made with XBRZPXL, or some other procgen artistic tool? I’d like to see it!




CreativeAI.net

There is a lot of overlap between the stuff I talk about here and the stuff posted on CreativeAI.net. You’ll occasionally see me post here about stuff I saw there, and other people have posted blog posts from here over there. If this stuff interests you, you should probably check it out.

http://www.creativeai.net/




patrikhuebner:

Generative typography experiment using the Box2D physics framework.

Created with code, built with Processing.

Patrik Huebner’s Generated Typography

Switching away from the complexities of machine learning, here’s a relatively simple but very effective artistic use of code.

Patrik Huebner has done a lot of work with creative coding, making art via code. His latest project uses Processing and a 2D physics engine to reinterpret typographical characters.
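The core idea is simple: sample points from a glyph, treat them as physics bodies, and let the simulation deform the letterform. Here’s a toy sketch of that idea in plain Python (my own illustration; Patrik’s piece uses Processing and the Box2D framework):

```python
# Sketch of physics-driven typography: each sampled glyph point
# becomes a particle under gravity with a crude floor collision,
# and a few simulation steps deform the letterform.
class Particle:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx, self.vy = 0.0, 0.0

def step(particles, dt=0.016, gravity=-9.8, floor=0.0):
    for p in particles:
        p.vy += gravity * dt          # integrate gravity
        p.x += p.vx * dt
        p.y += p.vy * dt
        if p.y < floor:               # bounce off the floor, losing energy
            p.y, p.vy = floor, -p.vy * 0.5

# "Glyph" = a vertical bar of sample points (a stand-in for a letter I).
glyph = [Particle(0.0, y * 0.1) for y in range(20)]
for _ in range(120):
    step(glyph)
```

A real version would sample the outline of a rendered glyph and use a proper engine like Box2D for collisions between letters, but the collapse-and-settle motion comes from exactly this loop.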

You don’t need to have lots of fancy new algorithms and AI to make computational art. Sometimes a simple idea is better.




JukeDeck

JukeDeck is a commercial project that uses machine learning to compose musical tracks. The tracks you create with it are available under a royalty-free license. I’m kind of fond of this one I made.

I’m most impressed with the algorithm’s ability to create an ending for its compositions. In my experience, that’s where most generative music composers run into trouble: a lot of algorithms just stop in mid-thought, which sounds very wrong in music. JukeDeck doesn’t always complete the piece perfectly, but it often does a decent job.
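To illustrate what an explicit ending buys you, here’s a toy of my own, not JukeDeck’s actual method: a Markov-style note generator with an END state and a forced cadence on the tonic, so it resolves instead of cutting off mid-phrase.

```python
import random

# Toy note chain in C major: each note lists (next_note, probability),
# including an explicit END state that lets the piece finish on purpose.
CHAIN = {
    "C": [("E", 0.5), ("G", 0.4), ("END", 0.1)],
    "E": [("G", 0.5), ("C", 0.5)],
    "G": [("C", 0.6), ("E", 0.4)],
}

def generate(start="C", max_len=32, seed=None):
    rng = random.Random(seed)
    notes, cur = [start], start
    while len(notes) < max_len:
        choices, weights = zip(*CHAIN[cur])
        cur = rng.choices(choices, weights=weights)[0]
        if cur == "END":
            break
        notes.append(cur)
    if notes[-1] != "C":          # force a cadence on the tonic
        notes.append("C")
    return notes

print(generate(seed=0))
```

Without the END state and the cadence check, the melody just runs until it hits the length cap, which is exactly the stop-in-mid-thought problem.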

Music has innate mathematical structure, and the different genres of music build more elaborate structures out of the basic building blocks. Which makes music one of the most potent generative fields: it’s easy for us to explore the many-layered patterns, and it can worm its way behind our logical analysis and touch our emotions.

It also makes music one of the hardest: we can hear when things sound wrong, and most of us have listened to enough music that even if you can generate a listenable track it’s all too easy for it to be boring or incoherent. To its credit, all the JukeDeck tracks I’ve listened to so far have been pretty varied and coherent.

The biggest downside of JukeDeck is that it’s a bit of a black box: hard for other people to build on top of, or to integrate into other things. What you get is what you get. 

More knobs to tweak would be nice, but ultimately being closed-source makes it less interesting to me, since I can’t tinker with it the way I’d like to. I imagine professional musicians would be better served with something that can output MIDI tracks rather than final arrangements. Still, if you need royalty-free temp tracks, JukeDeck has an infinite selection.




The pros and cons of procedural generation in Overland

I’m not in the paid alpha for Overland. (I don’t have time! I’ve got neural nets to practice painting with! Plus, Stellaris is coming out next week.) But I’ve been following the coverage of Overland with interest, because it’s a procedural post-apocalyptic road trip game. So this article on Gamasutra certainly caught my eye.

One reason I’ve mentioned Overland here before is that the developers have been pretty open about the challenges of using procedural generation–including discussing the downsides.

I like the discussion of constraints and of deploying content only in subsets:

“The other thing that helps address a lot of the potholes is to have a TON of ingredients but only ever deploy them in subsets. Spelunky is a fantastic example of how to do this super effectively. Definitely a big inspiration to us even though we’re doing more of a tactics thing.”

They also mention keeping the generation spiky instead of balanced, the usefulness of visual variety, and the addition of smaller goals along the way, and they discuss the difficulty of working on the system when it feels like progress has stalled.

But really, go read the entire article.

(Video via https://www.youtube.com/watch?v=ezLzLAO_Iic)




Yanko Oliveira’s Procedural Characters

Let’s get technical. You might remember Yanko Oliveira’s procedurally generated fairy chess, X, a Game of Y Z. Since then, Yanko’s been blogging about some procedural generation projects, including about the chess game and the thing I’m here to talk about this week: generating seamless character meshes.

A demon summoning prototype needs procedurally generated demons to summon, so Yanko has been working on procedurally generated characters. After a first pass in part one to generate the basic mesh and shader, there was a problem getting the modular mesh pieces to join seamlessly. Yanko’s got a solution for that now, with a good walkthrough of the thought process that led there and some Unity code illustrating some of the details.
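For a sense of what “joining seamlessly” involves, here’s a toy vertex-welding sketch in Python (my own illustration, not Yanko’s Unity code): snap any border vertices of one piece that lie within a small epsilon of a vertex on the other piece to the exact same position, so the join renders without gaps.

```python
# Weld the rim of mesh piece B onto mesh piece A: any B vertex within
# eps of an A vertex is snapped to A's exact position.
def weld(verts_a, verts_b, eps=1e-3):
    welded = list(verts_b)
    for i, vb in enumerate(verts_b):
        for va in verts_a:
            if all(abs(p - q) <= eps for p, q in zip(va, vb)):
                welded[i] = va   # snap to piece A's vertex
                break
    return welded

# Two modular pieces whose rims almost, but not exactly, line up.
torso_rim = [(0.0, 1.0, 0.0), (0.5, 1.0, 0.0)]
leg_rim   = [(0.0005, 1.0, 0.0), (0.5, 1.0004, 0.0)]
print(weld(torso_rim, leg_rim))
```

The naive all-pairs search here is fine for a rim of a few dozen vertices; a full character generator would also have to reconcile normals and UVs at the seam, which is where the interesting problems Yanko describes come in.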

Of course, now Yanko’s got to figure out how to rig and animate the characters. If you want to follow along, find some ideas for generating your own 3D characters, or have some suggestions for Yanko, I suggest you check it out:

http://gamasutra.com/blogs/YankoOliveira/20160502/271720/Your_problems_are_not_always_what_they_seam_combining_different_meshes_into_seamless_procedural_characters.php




FROM DADA TO JAVA: conversations about generative poetry & Twitter bots

A short documentary about bots and generative text that is very, very relevant to my interests.

Paul Kneale’s comments on intent touch on something I’ve been struggling with in all my talk about order and structure here. How do you give the reader the same sense of intent via an entirely generative process? The attempted answer I’ve been exploring is that structure and constraints substitute for intent.

Or, perhaps, are the meta-language by which intent is conveyed. By creating a bot that speaks according to a system of constraints, or generates a world with a constrained set of relationships, we can create a dialog between the reader and the hidden, possibly algorithmic author.

The discussion of a different way of reading generative works, building an internal mental model, interests me for similar reasons. The idea that the end of our reading is the moment of our understanding has a lot of resonance for me. (It would: I have a long allegiance to the idea that the basic unit of games and generative cybertext is the aporia and the epiphany that resolves it.)

There’s a lot of other ideas crammed into the dense twelve minutes, and it’s one of the best introductions to generative text I’ve seen. Do watch it.




Neural Doodle texture synthesis (in a chart)

I ran an experiment with the neural-doodle texture synthesis I’ve talked about previously, taking five seed images I created in Tilemancer and running them against a lot of different style images. You can see the results in the chart above.

What interests me in these outcomes is that the results aren’t just different from the style image, but in some cases wildly different–while still being, for the most part, plausible images. If you didn’t know which image was the original, could you tell?

(Here’s the 19MB full-sized PNG chart)


Mimicking the Content

[image]

The settings in the chart were geared towards creating a result that resembled the style image. I also did some tests for things closer to the content image. Here’s the purple rocks tile:

[images]

And here’s some based on the floor tile:

[images]

This technology is advancing incredibly rapidly: in the short time since I did these, there have already been some promising new projects released, and the neural-doodle algorithm has had some significant improvements, like the addition of rendering larger images in segments.

The next six months or so are going to be a very interesting time.




The Fanfic Maker

Choosing a framing for a story-generator can have a lot of impact on how it is received. In the case of The Fanfic Maker, the frame is that it makes terrible fan fiction.

The stories it generates are not anywhere near as well-written or sophisticated as A Time For Destiny. Instead of aiming for an overindulgent but cohesive result, it is a generator of intentionally bad writing. Or, in their words, “A site to generate awefull stories automatically.”

The responsible parties for this go by the name of Lost Again, a small group of art-game and educational-game makers. 

The advantage of aiming low is that it’s much easier to hit your target. This should not be underestimated: giving an AI an excuse helps set expectations and avoid the Eliza effect that happens when we expect too much. That’s why the “Eugene Goostman” bot pretended to be a 13-year-old Ukrainian boy, and why the 140-character limitation on Twitter bots is the kind of creative constraint that led to an explosion of bots being created.

Working within the limitations of a medium can create a much stronger result.

As Matthew Weise has pointed out, many of the approaches that have been developed for delivering narrative in videogames stem from the developers at Looking Glass Studios being dissatisfied with the way conversations weren’t very immersive in Ultima Underworld. Thus, for System Shock they invented the audio log. For Thief, you’re hiding and eavesdropping: you have very good reasons not to participate in the conversations. Influenced by these and other innovations (particularly Half-Life) the present-day approach to telling a story with a game gradually developed by embracing the limitations of the interface.

While nowadays I believe that NaNoGenMo demonstrates that we can aim higher in our procedurally generated stories, it’s still a rhetorically powerful move to embrace a constraint. Constraints create structure.

Thus, the meta-joke that The Fanfic Maker runs on. It certainly doesn’t rise to the heights of the best fan fiction. Mostly because it deliberately turned and ran in the other direction, screaming loudly.

http://fanficmaker.com/