Waillee by Prismbeings (4k intro)

Have a demoscene production by Prismbeings. It’s pretty amazing what they crammed into 4 kilobytes. Moreover, it’s a good example of how the right framing and context can give a piece structure. It uses some fairly straightforward procedural techniques, but it uses them well. You can, as it turns out, get a lot of mileage out of structured-noise heightfields with the right context. And then the music and the rest of the audio help tie a bow on it.
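The structured-noise part is a widely used recipe: sum several octaves of Perlin noise at doubling frequencies and halving amplitudes, then read the result as terrain height. A minimal sketch, assuming the third-party Python noise package (nothing here is specific to the intro itself):

```python
import noise  # third-party Perlin noise bindings (pip install noise)

def height(x, y, octaves=5):
    """Fractal sum: layered octaves of Perlin noise, read as terrain height."""
    total, freq, amp = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * noise.pnoise2(x * freq, y * freq)
        freq *= 2.0  # each octave adds finer detail...
        amp *= 0.5   # ...at a smaller amplitude
    return total

heightfield = [[height(x / 64.0, y / 64.0) for x in range(64)] for y in range(64)]
```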

https://www.youtube.com/watch?v=CDDedA-x2-k




Looping with Noise

This is a trick I’ve known for ages from VFX work, so I really like this thorough explanation by Etienne Jacob (who has an excellent Tumblr blog of generative stuff). The post also explores a bunch of different things that you can draw with noise. Perlin (and Simplex) noise is useful for more than just the heightfield maps you may be used to!

Golan Levin has made an animation that demonstrates the basic idea behind the looping animations even more directly. (And provided source code for that too.) Like I said, this is one of my favorite noise tricks, and it works with any application of noise, so long as you have noise that’s at least one dimension higher than your output. And while circles are easiest (just rotate around a point), you can use any closed figure or repeating loop.
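The core trick is small enough to show in full: to make a looping animation, walk in a circle through a noise field one dimension up, so the last frame lands exactly where the first one started. A minimal sketch, again assuming the third-party noise package (parameter names are mine, not Jacob’s or Levin’s):

```python
import math
import noise  # third-party Perlin noise bindings (pip install noise)

def loop_noise(t, radius=1.0, cx=0.0, cy=0.0):
    """Sample 2D noise along a circle; t=0.0 and t=1.0 coincide,
    so any animation driven by this value loops seamlessly."""
    angle = 2 * math.pi * t
    return noise.pnoise2(cx + radius * math.cos(angle),
                         cy + radius * math.sin(angle))

frames = 120
values = [loop_noise(i / frames) for i in range(frames)]  # a perfect loop
```

The radius controls how far the value wanders over one loop, and sampling around a different center gives you an independent looping channel for each property you want to animate.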

https://necessarydisorder.wordpress.com/2017/11/15/drawing-from-noise-and-then-making-animated-loopy-gifs-from-there/

https://gist.github.com/golanlevin/46a8fd29114f7a9f41345a9f0ccfd059








Procedural Worlds from Simple Tiles

This approach for generating maps from tileset definitions caught my eye. I’d describe it as WaveFunctionCollapse-adjacent, though Isaac Dykeman instead explains this generator in terms of Wang Tiles. Dykeman describes the tiles with a visual specification, encoding the metadata about each 3x3 tile in the image itself.

Besides that, the main distinction is that its basic unit is a single tile rather than a pattern of tiles, making the output a more direct representation of the input specification. This enables an optimization based on the transitions from one tile type to the next, speeding up the calculation of how placing a tile constrains the rest of the map.
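Here’s a toy version of the tile-as-unit idea, with a hand-written adjacency table standing in for Dykeman’s image-encoded specification (the tile set and its constraints are invented for illustration):

```python
import random

# Each tile type lists which tiles may legally sit to its right and below it.
TILES = {
    "grass": {"right": {"grass", "sand"}, "down": {"grass", "sand"}},
    "sand":  {"right": {"grass", "sand", "water"}, "down": {"grass", "sand", "water"}},
    "water": {"right": {"sand", "water"}, "down": {"sand", "water"}},
}

def generate(width, height, rng=random.Random(0)):
    grid = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            options = set(TILES)
            if x > 0:  # constrain by the tile to the left
                options &= TILES[grid[y][x - 1]]["right"]
            if y > 0:  # constrain by the tile above
                options &= TILES[grid[y - 1][x]]["down"]
            grid[y][x] = rng.choice(sorted(options))
    return grid

for row in generate(12, 4):
    print(" ".join(f"{tile:5}" for tile in row))
```

Because “sand” is a legal neighbor of everything, this toy scan never dead-ends; a real generator needs more care, and gets its speed from precomputed transition information.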

The code is on GitHub, if you’d like to take a look for yourself.

http://ijdykeman.github.io/ml/2017/10/12/wang-tile-procedural-generation.html

https://github.com/IJDykeman/wangTiles




Generating Maps With Photoshop Actions

You can, of course, generate things with recorded Photoshop actions. One artist who uses the handle Yo-L did just that. They posted the actions and patterns used, though they require Photoshop CC, so this exact set of recorded actions won’t work in earlier versions.

The principle is sound, though, and you can use the concepts for all kinds of image generation. Remember, Photoshop is just acting on a 2D array of numbers. A generator that uses a node-based interface or is coded directly as functions can do the exact same math.

It’s sometimes worth hauling out an image editor and seeing what an operation will look like as you’re trying to reason through the inner workings of your generator.

Some useful operations:

Curves and ramps:

By applying a gradient (or a more complicated ramp or curve) to an image, we can create a wide variety of effects.

[image]
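In code terms, a curves adjustment is just a per-pixel function applied to the whole array. A sketch with NumPy (the input here is random data standing in for any grayscale image in [0, 1]):

```python
import numpy as np

img = np.random.default_rng(0).random((256, 256))  # stand-in grayscale image

s_curve  = 3 * img**2 - 2 * img**3   # smoothstep-style contrast boost
terraces = np.floor(img * 8) / 8     # stair-step ramp: posterize into 8 bands
inverted = 1 - img                   # the simplest "curve" of all
```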

Clouds, curves, threshold:

This is just a curve layer and a threshold over the Clouds filter (which is itself a form of value noise), but it creates very organic transitions:

[image]
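The same recipe outside Photoshop, sketched with NumPy and SciPy (a small random grid upscaled with cubic interpolation is a crude stand-in for the Clouds filter):

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(1)
clouds = np.clip(zoom(rng.random((9, 9)), 32, order=3), 0, 1)[:256, :256]
curved = np.clip(1.5 * (clouds - 0.5) + 0.5, 0, 1)  # steepen the midtones
landmass = (curved > 0.5).astype(float)             # threshold: land vs. water
```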

Adding detail with a blend mode:

Need to add detail to a coastline? Here I’m screening a noise layer over a smooth threshold-based coastline. “Screen” in Photoshop just inverts both layers, multiplies them together, and then un-inverts the result. Just like the filters, all Photoshop blend modes are just math.

[image]
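That description translates directly into code (values in [0, 1]):

```python
def screen(a, b):
    # Screen blend: invert both layers, multiply, un-invert the result.
    return 1 - (1 - a) * (1 - b)

# screen(0.0, 0.3) == 0.3 and screen(0.8, 0.3) == 0.86: screening only ever
# brightens, so noise screened over a coastline roughens its darker side.
```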

What about removing detail?

Here I’m using a Gaussian blur as a low-pass filter to remove the details from the underlying value noise, which results in a smoother look. If you’ve got a generator that’s too noisy, try filtering out the frequencies that you don’t want.

[image]
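The code equivalent, using SciPy’s Gaussian filter as the low-pass (the sigma here is arbitrary; larger values cut deeper into the high frequencies):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

noisy  = np.random.default_rng(3).random((256, 256))  # stand-in noisy image
smooth = gaussian_filter(noisy, sigma=4)  # low-pass: only broad features survive
```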

Distortion!

Alternatively, maybe you just want to distort your image a bit and break up its regularity. Remember this post about Íñigo Quílez’s use of warping? Photoshop’s Distort filter does the same thing:

[image]
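A sketch of the same idea in code: offset every sampling position by a second, smoother random field before looking up the pixel (the test pattern and displacement amounts are invented for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

ys, xs = np.indices((256, 256))
img = ((xs // 16) % 2).astype(float)  # rigidly regular vertical stripes

rng = np.random.default_rng(4)
dx = gaussian_filter(rng.standard_normal(img.shape), sigma=16)
dy = gaussian_filter(rng.standard_normal(img.shape), sigma=16)
dx *= 12 / dx.std()  # rescale to roughly +/- 12-pixel displacements
dy *= 12 / dy.std()

# Look up each pixel at its displaced position: the stripes come out wobbly.
warped = map_coordinates(img, [ys + dy, xs + dx], order=1, mode="wrap")
```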

There are a zillion more things you can do with this (look up image kernels in particular). If you’ve got a generator that’s putting out results that are close to what you want but not quite there yet, consider doing some post-processing on it.

I’d love to see other people’s examples of simple post-processing techniques, whether implemented in code or in Photoshop.

Yo-L’s Photoshop map generator: https://imgur.com/a/HWXd0#Cpg7RUl




Procedural Aircraft Design Demo

I recently came across this really cool project to build an airplane generator. Denis Kozlov made it in Houdini, creating a web of nodes that transforms the input parameters into fully-realized 3D models, complete with twisted rivets and weathered textures.

It’s a great example of a complete generation pipeline, using a parametric approach to give a designer control at every step of the generation while taking care of the grunt work and making sure everything connects properly.

Tools like this are exactly the kind of thing I think is important for artists. Automation can help artists produce work faster. But, more than that, it helps artists make better work: with the tool taking care of little details like making the curves fit together, the artist can concentrate on the design aspects that the computer can’t handle.

Going further, it lets the artist think in larger concepts. Instead of thinking in units of vertexes or pixels, the artist can think in terms of “a wing” or “how the curve of the fuselage flows with the tail.”

It’s also an opinionated tool: it just makes airplanes, and a specific type of airplane at that. This lets it be really good at the one thing it’s designed for and introduce variation where it counts. Generative systems that do specific things can be better than ones that try to do everything, because they can concentrate on the choices that have the maximum effect.
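I can’t reproduce Kozlov’s Houdini network here, but the parametric skeleton of a tool like this is easy to caricature: a few designer-facing knobs, with every dependent measurement derived from them so the parts always stay consistent (all names and ratios below are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class PlaneParams:
    # The knobs a designer actually touches.
    fuselage_length: float
    wing_span: float
    wing_sweep_deg: float

def derive(p: PlaneParams) -> dict:
    """The grunt-work layer: downstream measurements are computed from the
    knobs, so everything keeps fitting together no matter how they move."""
    return {
        "wing_root_chord": 0.18 * p.fuselage_length,
        "tail_span": 0.35 * p.wing_span,
        "rivet_count": int(40 * p.fuselage_length),
    }

print(derive(PlaneParams(fuselage_length=12.0, wing_span=10.0, wing_sweep_deg=25.0)))
```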

http://www.the-working-man.org/2017/04/procedural-content-creation-faq-project.html

http://www.the-working-man.org/2015/04/on-wings-tails-and-procedural-modeling.html




How Generative Music Works

This presentation by Tero Parviainen is a great look at the history and future of generative music, and it also has interactive examples of many of the generative systems it talks about.

I appreciate the look at generative music (and Tero’s earlier talk on a similar subject), but more than that I want to point out that this kind of interactive presentation is an ideal way to learn about these systems. Having the opportunity to see the generators in action, and perhaps play with them, is a powerful way to internalize what they are and how they work.

https://teropa.info/loop/#/title




Discovery Scanner 1 - Creating a Galaxy with Dr Anthony Ross 

I’m very fond of behind-the-scenes looks at how generative systems work. By their nature, it can be hard to make definitive statements about the parts of a system whose structure isn’t directly visible in its output.

So I’m quite happy that the Frontier Developments team has started doing livestreams where they talk about how Elite: Dangerous works, particularly the server-side things that you can’t observe directly in the game. Like how the galaxy generator works.

While there have been dozens of attempts to simulate galaxies (going right back to Frontier: Elite II), it is harder than it looks. The sheer scale of the generator poses some surprising issues in getting the math to work, to say nothing of making it look good.

The generator in Elite: Dangerous goes way beyond the rolling-on-tables approach that the original Elite pioneered. While I’m aware of some other attempts to simulate planetary formation, this is the only one I know of that made it into a finished game. (Other than, I suppose, Universe Sandbox.)

The video is a great behind-the-scenes glimpse at how the different systems interact to create a history and context for the stars, and then use that history to directly influence the planets that get created, which is how the generator is able to come up with so many physically plausible but entirely surprising star systems.

I doubt that a human trying to implement an expert system for generating planets would come up with quite as many interesting features in these combinations. Double-planet pairs, complex ballets of multi-star systems, moons with orbits measured in minutes: there are a lot of functional-according-to-the-simulation things floating out in space that a designer would have tossed out as implausible.

That’s one of the advantages of using a simulation (or something like a neural net) to generate to a spec: the machine is willing to try off-the-wall creative ideas without caring whether they sound plausible, as long as they fall inside the constraints. The trick is to specify the metric you actually want to measure.

Things like the Elite: Dangerous galaxy generator demonstrate how the computer can be an effective creative partner. While it can be very powerful to directly author the probabilities in a generator for a specific rhetorical end, it can be equally powerful to let go of the exact shape of the generation and let it settle into a shape that the computer finds for you.














Worldbuilding and Procedural Generation

I recently gave a talk at Dakota State University about procedural generation and worldbuilding.

The basic thesis is that worldbuilding is about constructing imaginary worlds directly out of ideas, so we don’t need to limit ourselves to generating the artifacts of a world: the maps and encyclopedias.

Instead, we can directly express ideas about the world through the structure of the generator that defines it. After all, the greatest expert on what a dwarf in Dwarf Fortress is like is the source code of Dwarf Fortress.

I used the occasion to talk about the thinking I’ve been developing around the poetics of generation, building off the inspiration I’ve found in NaNoGenMo (not to mention Emily Short’s work).

You’ll recognize more than a few examples I’ve talked about here on the blog, as well as several that I haven’t written about yet. I view it as one small step toward an aesthetic theory for working with generative systems.

While I don’t have a recording of the talk at hand, I have put the slides online. Plus my notes, such as they are.




When Humans are Parasites on Algorithms

You know, I’d hoped to completely forget about these things, but apparently they’ve gotten worse–or I just wasn’t fully exposed to the true horror.

If you’re not aware, writer and artist James Bridle recently wrote a post called Something is wrong on the internet.

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level. Much of what I am going to describe next has been covered elsewhere, although none of the mainstream coverage I’ve seen has really grasped the implications of what seems to be occurring.

I’d suggest reading it, though maybe don’t watch the linked videos, which are comparatively mild examples.

It’s still not entirely clear to what extent the creation of these things is automated. Some involve real humans singing finger family songs to match bizarre combinations of keywords. Others are clearly either automated or produced in sweatshops–the dirty secret of AI is the extent to which a lot of it is just an update of the Mechanical Turk.

But the crucial point that I missed is that there is an obvious automated system behind all of this: YouTube’s algorithms. They’re not very smart, even as algorithms go. Though they’d be classified as AI, it’s more like the intelligence of a bacterium, or perhaps a very dim sort of worm, like the kind that crawls on a sidewalk after the rain and dies when the sun comes out.

This is an important point: when we say “Artificial Intelligence” we often think of a complex, almost magical intellect…but most of the real AI work is merely blindly optimizing for a particular goal. AlphaGo is very good at playing Go, and absolutely useless at anything else. The interesting part, for researchers, is figuring out how to transfer the principles behind the AI to solve other problems.

In YouTube’s case, the algorithm is blindly trying to find content that’s vaguely associated with the thing you just watched. As it turns out, when you combine it with human toddlers, that leads down a dark, dark hole as their uncritical viewing reinforces the algorithm. The wet sidewalk has way more water than the damp earth; the worm doesn’t realize the danger; and we get a YouTube run by undead zombie worms, showing children content they’ll never forget, much like this metaphor.

So we’ve created an economic ecosystem where humans are literally performing incantations to fulfill what they think an AI wants, because an interaction between an AI and babies has gotten stuck in a local maximum.

Mike Cook has a good response to this: Better Living Through Generativity:

Photography’s slow walk to ubiquity also had a darker side. As it became better-known, photography was understood to be a way to record real events, but this imperfect understanding enabled a lot of people to do fairly awful things with it. One of its first proponents faked his own death. People used it to prove the existence of fairies, to show that they could summon spirits, to verify the existence of monsters and mythical animals. People knew enough about photography to benefit from it, but not enough to protect themselves from it.

What solved this problem? Well, a bunch of things, but undoubtedly one of those things was helping people understand the processes by which these images were made, and giving them the power to make them. Cameras and development became more commonplace, people understood how to overlay images or touch them up after the fact. We see the same cycle today with Photoshop: first, it caused chaos; then people understood it; finally, people harnessed the power for themselves. Now we edit photos on our phones as we take them, before sharing them with others.

I see PROCJAM as part of an effort to enact this change for generative software. By bringing people the resources, the tools and the space to make generative systems, they can take ownership of the concept and understand their strengths and their weaknesses. Right now only a few hundred people enter PROCJAM, but ultimately we should all be working to make these ideas accessible and fun for people to try. In doing so, we popularise these ideas and rob them of some of their incapacitating power.

Go read the whole thing.

This won’t solve the problem of YouTube–in the end, Google is responsible for what their algorithm has done. But I do think that one of the best personal defenses against this is to learn what procedural generation and AI are really capable of and–more importantly–what they can’t do.

This stuff isn’t magic, even when it looks like it is.




Bottery

In exciting news for bot-makers, Kate Compton’s advanced bot-making agent platform, Bottery, was just open-sourced by Google.

You can play with a live demo here.

Kate’s earlier project, Tracery, already powers thousands of bots and other generative things. Its combination of ease of use and expressive power opened up grammar-based generativity to everyone and led to an explosion of creativity.

Bottery takes the concepts of Tracery and puts them within a finite state machine, greatly increasing the expressive power available. While some advanced Tracery features add the ability to remember a bit of state, using a state machine, which is all about state, makes previously difficult things trivially easy. And with an accessible interface, too.
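This isn’t Bottery’s actual syntax, but the grammar-plus-state-machine combination can be sketched in a few lines (the rules, states, and transitions here are all invented for illustration):

```python
import random

GRAMMAR = {  # Tracery-style expansion rules
    "greet":  ["Hello, #name#!", "Hi there, #name#."],
    "name":   ["traveler", "friend"],
    "answer": ["The weather is #mood# today.", "Everything looks #mood#."],
    "mood":   ["lovely", "ominous"],
}

def expand(symbol, rng=random.Random(0)):
    """Recursively replace #key# slots with expansions of that key."""
    text = rng.choice(GRAMMAR[symbol])
    while "#" in text:
        pre, key, post = text.split("#", 2)
        text = pre + expand(key, rng) + post
    return text

# A tiny state machine: each state names what to say and where to go next.
FSM = {"start": ("greet", "chat"), "chat": ("answer", "chat")}

state = "start"
for _ in range(3):
    symbol, state = FSM[state]
    print(expand(symbol))
```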

I fully expect that, given time, Bottery will be as significant for bringing AI to all of us as Tracery was. Meanwhile, you can use it for your ProcJam project. Or, with a bit of work, for your NaNoGenMo project!

There’s going to be an online launch party on the 8th, where Kate will be streaming and doing tutorials.

https://github.com/google/bottery