Gen (& Sound Change Applier)

It’s easy to generate random combinations of letters, but that’s not the same thing as inventing new words. Languages have patterns and rules, which is why even the simple set of digrams that Elite uses:

char pairs0[] =
    "ABOUSEITILETSTONLONUTHNOALLEXEGEZACEBISOUSESARMAINDIREA.ERATENBERALAVETIEDORQUANTEISRION";

char pairs[] = "..LEXEGEZACEBISO"
               "USESARMAINDIREA."
               "ERATENBERALAVETI"
               "EDORQUANTEISRION"; /* Dots should be null-print characters */

…gets better results than just throwing random letters together.
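As a sketch of how a digram table like that gets used (sampling whole two-letter pairs instead of single letters; the dots stand in for Elite's null-print characters, and the pair count here is an arbitrary choice, not Elite's actual algorithm):

```python
import random

# Elite-style digram table: two-letter pairs laid end to end; "."
# marks a null-print character that is dropped from the output.
PAIRS = ("..LEXEGEZACEBISO"
         "USESARMAINDIREA."
         "ERATENBERALAVETI"
         "EDORQUANTEISRION")

def digram_name(num_pairs=4, rng=random):
    """Build a name by concatenating randomly chosen letter pairs."""
    name = ""
    for _ in range(num_pairs):
        i = rng.randrange(len(PAIRS) // 2) * 2  # pick a pair boundary
        name += PAIRS[i:i + 2]
    return name.replace(".", "").capitalize()

print(digram_name())  # e.g. "Soustied"
```

Because every adjacent letter pair came from the table, the results tend to be pronounceable in a way uniformly random letters aren't.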

If you’re trying to create a new language (and there are many constructed language enthusiasts) then you’ll probably want something that takes into account the deeper patterns in the language, like syllable types, non-English-vowels, sound changes, and so on.

There are several generators for conlang creators to use as inspiration for new words that fit the patterns of their languages, but I’m just going to talk about one today: Mark Rosenfelder’s Gen, which goes together with his guide for creating constructed languages.

The settings for Gen’s generator reflect the language that it’s generating for. If a language (like Quechua) only allows syllables with an optional-consonant, required-vowel, optional-consonant pattern, the generator needs to take that into account. Likewise, languages have different vowel inventories: Cusco Quechua only has /a/, /i/, and /u/, while Danish has /ɑ a æ ɛ e i o ɔ u ø œ ɶ y ʌ ɒ/. This obviously affects the kinds of words that fit.
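A minimal sketch of that syllable-pattern idea (hypothetical phoneme sets, loosely Quechua-flavored; the real Gen lets you define your own categories, patterns, and frequencies):

```python
import random

# Hypothetical categories: C = consonants, V = vowels.
# "(C)V(C)" means optional onset, required vowel, optional coda.
CATEGORIES = {"C": "ptkqshmnlrwy", "V": "aiu"}

def syllable(pattern="(C)V(C)", rng=random):
    out = []
    i = 0
    while i < len(pattern):
        if pattern[i] == "(":        # optional slot: include it half the time
            cat = pattern[i + 1]
            if rng.random() < 0.5:
                out.append(rng.choice(CATEGORIES[cat]))
            i += 3                   # skip past "(C)"
        else:                        # required slot
            out.append(rng.choice(CATEGORIES[pattern[i]]))
            i += 1
    return "".join(out)

def word(syllables=2, rng=random):
    return "".join(syllable(rng=rng) for _ in range(syllables))

print(word(3))  # e.g. "taqmiru"
```

Every output obeys the syllable structure, so the words feel like they belong to one language rather than to random noise.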

The Sound Change Applier is even more about the hidden processes of language, in this case the way that pronunciation tends to shift over time.
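A toy version of that idea (hypothetical rules; the real SCA² uses a `target/replacement/environment` notation and supports categories, wildcards, and ordered rule sets):

```python
import re

# Each rule: (regex pattern, replacement). In SCA notation "_" marks
# where the target sits in its environment; here that context is
# encoded in the regex. Hypothetical example rules:
RULES = [
    (r"k(?=i)", "ch"),    # /k/ palatalizes to /ch/ before /i/
    (r"[aeiou]$", ""),    # word-final vowels are lost
]

def apply_changes(word, rules=RULES):
    """Apply each sound change in order, as centuries of drift."""
    for pattern, replacement in rules:
        word = re.sub(pattern, replacement, word)
    return word

print(apply_changes("kisa"))  # "chis": k > ch before i, then final a drops
```

Running a whole lexicon through the same rule list is what gives a descendant language its consistent, evolved feel.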

Once again, the hidden structure is an important source of order for the generator. This is a principle that can be applied to more than just language generation: what structure is important for the thing you’re trying to generate? Find that and you’ll find what’s important about the thing you’re generating.

That’s also why generators can be an effective form of rhetoric: by describing a process, a generator can also act as a critique of the system that process creates.

http://www.zompist.com/gen.html
http://www.zompist.com/sca2.html






Priorities in Generation & Generalizing from Examples

I finally got around to playing Invisible Inc this week (you don’t want to see the length of my to-be-played list) which prompted some thoughts about your goals while designing a generator.

The levels in Invisible Inc don’t even try to be realistic. They’re assembled from rooms and corridors in a videogame fashion: here’s the goal, there’s the exit, sprinkle in some guards and gated paths in between. It’s not simple: it knows to place rewards in dead ends, plan out guard patrol paths, and hide cameras in clever places.

The downside is that the logic of the levels isn’t obvious. There’s no real-world justification visible in the layout: any door can hide a prison cell or a teleporter exit. Your anticipation of what’s in the next room is based on what you can survey, not on your learned expectations.

Given the constraints of production, this was probably the right call. After all, this is part of the problem that brought down Introversion’s canceled Subversion: Subversion had some brilliant, realistic-looking procedural building-interior tech, but they never managed to crack the problem of making the gameplay fun.

Invisible Inc, on the other hand, knows exactly what it’s about, which let its developers radically rethink the stealth game. The buildings it generates might not be realistic, but they work for the gameplay.

I’d still like to see a deeper, more meaningful logic that can make more aesthetic layouts. But it has to be balanced against the primary goal of the level generator: to make playable levels.

Good underlying meaning can make better, more playable levels. But the trick is that it needs to work. Fortunately, different projects have different goals, and the cutting-edge research into procedural generation can exploit those different goals to triangulate new places to experiment.


Working from Examples

I’ll probably have more to say about Invisible Inc itself once I see more of it. My current reaction is based on a handful of generated levels, which points to another issue I continually confront: it can be hard to generalize from a handful of examples.

When I’m playing a game it takes a while for me to be sure that I’ve seen enough variation to say that I can feel the contours of the generator. Or, in some cases, that there’s generation going on at all. The same is true when I’m building a generator: since I can read the code, I can guess at the outcome, but I can’t know how much variation Virgil’s journey actually has without extensive experimentation.

Some people are working on ways to address this. Michael Cook’s Danesh, for example, is a toolkit to bring better analysis to designing generators, with testing, visualization, constraints, and automatic searches for variable settings.







Unknown Peoples

Unknown Peoples, a Twitter bot by @mousefountain (who has been mentioned here before), describes cultures and peoples in brief snippets, offering glimpses of a terse travelogue.

The generation here is quite effective: it’s easy for this style of bot to be dull, as repetition wears the edges off the clash of striking imagery. This generator manages to strike an effective balance.

The additional content by Tanya X. Short and Jason Grinblat no doubt helped: they both have extensive experience writing for generators. A strength of having multiple writers work on a text generator is that diverse voices can expand the generation space.

https://twitter.com/neighbour_civs

The bot also answers questions.






Cogmind’s Seeds

People have asked me about random seeds before, but I always like being able to point people towards more resources. Particularly when it’s a write-up like this one, where Josh Ge has provided a slew of technical details about how Cogmind uses seeds, where some of the pitfalls are, and many applications of being able to store seeds.

Being able to repeat the conditions that led to bugs is, as Josh points out, a very useful feature of storing the seed. Cogmind also stores seeds for each pile of loot (enabling it to generate treasure on-the-fly and have it always be consistent).
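A sketch of that per-pile idea (hypothetical names and loot table, not Cogmind's actual code; the point is that a pile's contents are a pure function of its stored seed, so they can be regenerated on demand):

```python
import random

LOOT_TABLE = ["scrap", "power cell", "armor plate", "processor"]

def generate_pile(seed, size=3):
    """Deterministically generate a loot pile from its stored seed."""
    rng = random.Random(seed)   # local RNG: doesn't disturb the world RNG
    return [rng.choice(LOOT_TABLE) for _ in range(size)]

# The world generator only needs to store one integer per pile...
pile_seed = 81345
# ...and the same contents can be rebuilt whenever the pile is examined.
assert generate_pile(pile_seed) == generate_pile(pile_seed)
```

Using a separate `random.Random` instance per pile matters: pulling numbers from a single shared stream would make every pile's contents depend on the order things were generated in.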

There’s also a rather clever naming system for the seeds, a discussion of how sharing seeds can help a community form, how to plan for a replay feature, how to save large worlds using seeds, and the interesting idea of Brogue’s seed catalog.

http://www.gridsagegames.com/blog/2017/05/working-seeds/

Like Josh, I’m also interested in finding more ways to use seeds. Do you know any other ways that you can use random seeds?










Phase-Functioned Neural Networks for Character Control

For most games with animated 3D characters, a lot of time is spent linking the animations with the character’s motion, often using complex state machines that create webs of animation transitions.

This research, by Daniel Holden, Taku Komura, and Jun Saito, instead used a neural network to act as the character controller. The neural network is trained on a large dataset of animations and terrain data, taking gigabytes of data and combining it into a function that runs quickly and uses only a few megabytes of memory.
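The “phase-functioned” part can be sketched like this (toy sizes and random stand-in weights, not trained ones; in the paper, the network’s weights are themselves computed from the gait-cycle phase by cyclically interpolating a few stored weight sets):

```python
import numpy as np

# Four stored weight sets, spaced evenly around the phase cycle [0, 2π).
rng = np.random.default_rng(0)
weight_sets = [rng.standard_normal((4, 8)) for _ in range(4)]

def phase_weights(phase):
    """Cyclic Catmull-Rom blend of the stored weight sets at `phase`."""
    t = (phase / (2 * np.pi)) * 4          # position in the 4-point cycle
    k = int(t) % 4                         # base control point
    w = t - int(t)                         # fractional position
    y0, y1, y2, y3 = (weight_sets[(k + i - 1) % 4] for i in range(4))
    # Standard Catmull-Rom spline, evaluated per weight entry.
    return 0.5 * ((2 * y1) + (-y0 + y2) * w
                  + (2 * y0 - 5 * y1 + 4 * y2 - y3) * w ** 2
                  + (-y0 + 3 * y1 - 3 * y2 + y3) * w ** 3)

# At a control point, the blend reproduces that weight set exactly.
assert np.allclose(phase_weights(0.0), weight_sets[0])
```

Only a handful of weight sets are stored, but the controller gets smoothly varying weights for any point in the animation cycle, which is how gigabytes of training data compress into a few megabytes at runtime.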

There’s been some past research in this area, but, based on the video of their results, their phase-functioned approach is very, very effective.

This is the exact kind of generative tool that can empower artists. It still needs the artistic input (that animation data has to come from somewhere) but it takes care of the very tedious work of combining all of those animations, freeing the artists to produce even more art. (And the technical artist can go improve some other tool.)


And, since the training happens offline rather than while the game is running, the risky part of training a neural net can be supervised, and the game can ship with just the resulting locked-in function.

http://theorangeduck.com/page/phase-functioned-neural-networks-character-control




Font Map

There are a lot of fonts out there, and there isn’t always a good way to see which fonts are similar to the look you’re going for without doing laborious searches. Inspired by previous machine learning projects, developer Kevin Ho decided to map a collection of fonts into a unified visualization: a 2D t-SNE projection of the font manifold.
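A sketch of the general recipe (with plain PCA standing in for t-SNE, and random vectors standing in for learned font features, to keep it self-contained; the project itself uses neural-net features and t-SNE):

```python
import numpy as np

# Stand-in data: one high-dimensional feature vector per "font".
rng = np.random.default_rng(1)
features = rng.standard_normal((200, 64))   # 200 fonts, 64-d features

def pca_2d(x):
    """Project feature vectors to 2D along the top two principal axes."""
    centered = x - x.mean(axis=0)
    # The right singular vectors give the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

coords = pca_2d(features)       # one (x, y) point per font
print(coords.shape)             # (200, 2)
```

Either way, the payoff is the same: fonts whose feature vectors are close end up as nearby points, so “similar but not identical” becomes a spatial neighborhood you can browse.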

I like the practical application here: for graphic designers, finding a font that has the right style but isn’t over-used is a common task. Or finding one that’s similar to one style, but just a bit different. So a tool to help with that is welcome.

When I talk about how generative approaches can help artists and designers, this is exactly the kind of thing I’m talking about: tools that let us see relationships that would otherwise be invisible.

Can we extend this approach to other graphic tasks that could use a better interface for viewing potential outcomes? For example, what about a map of the different outcomes for a shader’s settings? Or a visualization for picking models in a Bethesda-style modular kit? Or a map of outcomes for different settings of a procedural generator, a bit like some of the visualization Mike Cook’s Danesh is doing? (Bonus points if you manage to use the manifold to add interpolation.)

http://fontmap.ideo.com/

https://medium.com/ideo-stories/organizing-the-world-of-fonts-with-ai-7d9e49ff2b25




I’ve just released an application for macOS called “Resonant Element” that uses an array of generative techniques to produce an endless stream of high-quality music in a variety of genres. One of the primary goals of the project was to try to create a system that would be able to pass the Musical Turing Test, and early indications are that this is a pretty reasonable attempt!

The system uses stochastic models to guide chord progressions, select subsequent notes (both pitch and duration) in melodies, select performance techniques to use to interpret the musical structures, and many other aspects of the generative process.
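The chord-progression part of that can be sketched with a simple Markov chain (a hypothetical transition table; I don’t know the actual models Resonant Element uses):

```python
import random

# Hypothetical first-order transition table over functional chords:
# each chord lists the chords allowed to follow it.
TRANSITIONS = {
    "I":    ["IV", "V", "vi", "I"],
    "ii":   ["V", "vii°"],
    "IV":   ["V", "I", "ii"],
    "V":    ["I", "vi"],
    "vi":   ["IV", "ii"],
    "vii°": ["I"],
}

def progression(length=8, start="I", rng=random):
    """Walk the transition table to build a chord progression."""
    chords = [start]
    while len(chords) < length:
        chords.append(rng.choice(TRANSITIONS[chords[-1]]))
    return chords

print(" ".join(progression()))  # e.g. "I V vi IV V I IV ii"
```

Because every step is drawn from the previous chord’s allowed successors, the output stays inside familiar harmonic territory while still varying from run to run.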

I presented a paper describing the internals of the system at the 19th Generative Art conference in December of last year, though in the months that followed, I made many improvements and modifications to the structure of the system, with many more to come!

(submitted by B.T. Franklin)

Thanks for the submission! I like seeing where generative audio is going, since it’s not my strong suit. This seems to be an especially cohesive generator, which I find to be harder to pull off in music than in most other generator types.






A Generative Approach to Simulating Watercolor Paints

Tyler Hobbs is a generative artist who uses Quil to create images. Tyler’s write-up of his watercolor paint generation is effective both as a walkthrough of the artistic process and as a handy approach that you can borrow ideas from.

The basic idea is to take an irregular shape and composite a lot of slightly-varying, nearly-transparent copies in a stack. There are some nuances to Tyler’s approach, which results in the gorgeous look you can see above.
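The stacking step can be sketched with plain arrays (a jittered disc stands in for Tyler’s deformed polygons, which do most of the aesthetic work in his version):

```python
import numpy as np

rng = np.random.default_rng(7)
SIZE, LAYERS, ALPHA = 128, 40, 0.04   # canvas size, copies, opacity per copy

def blob_mask(cx, cy, r):
    """Boolean mask of a disc; the jitter below varies it per layer."""
    y, x = np.ogrid[:SIZE, :SIZE]
    return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

canvas = np.zeros((SIZE, SIZE))       # accumulated pigment, 0..1
for _ in range(LAYERS):
    # Slightly vary the shape each time, like a wobbling watercolor edge.
    cx, cy = 64 + rng.integers(-6, 7), 64 + rng.integers(-6, 7)
    r = 30 + rng.integers(-8, 9)
    mask = blob_mask(cx, cy, r)
    # Composite a nearly-transparent copy over what's already there.
    canvas[mask] = canvas[mask] + ALPHA * (1 - canvas[mask])

# Pigment ends up densest where many layers overlap, soft at the fringes.
```

The soft gradient at the edges falls out of the overlap statistics: pixels near the center are covered by almost every copy, pixels at the fringe by only a few.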

http://www.tylerlhobbs.com/writings/watercolor














Voxel Automata Terrain

A voxel cellular automata terrain generator by R4_Unit, written in Processing.

It uses the diamond-square algorithm, noise generation, and cellular automata rules to create voxel spaces.

A very general overview of how it works:

The basic idea is that you pretend you’ve drawn a certain level of detail of the image into a grid of voxels and you are now wondering how to make a twice-as-detailed image. You do so with a collection of rules. First, you have a rule for how to fill in the center of a cube when you know all the corners; second, you have a rule for filling in all the faces when you know the corners and centers; and finally you have a rule for filling in the edges when you know everything you’ve filled in so far. In this way, you can go from a voxel grid to one twice the size. Repeating this many times gives you a large, detailed voxel grid.
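In 2D, that subdivision step looks something like this (a toy fill rule of my own; R4_Unit’s version works on 3D cubes with separate rules for centers, faces, and edges):

```python
import random

def subdivide(grid, rng=random):
    """Double a square grid's resolution: keep the known cells on even
    coordinates, then fill each new cell from its known neighbours."""
    n = len(grid)
    m = 2 * n - 1
    new = [[None] * m for _ in range(m)]
    for y in range(n):                      # carry over the coarse cells
        for x in range(n):
            new[2 * y][2 * x] = grid[y][x]
    for y in range(m):                      # fill the in-between cells
        for x in range(m):
            if new[y][x] is None:
                known = [new[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if 0 <= y + dy < m and 0 <= x + dx < m
                         and new[y + dy][x + dx] is not None]
                # Toy rule: copy a random already-known neighbour.
                new[y][x] = rng.choice(known)
    return new

grid = [[0, 1], [1, 0]]
for _ in range(3):          # 2x2 -> 3x3 -> 5x5 -> 9x9
    grid = subdivide(grid)
print(len(grid))            # 9
```

Swapping that one-line fill rule for a cellular-automaton rule over the neighbours is where the interesting terrain character comes from.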

Really detailed images are a bit slow, as you’d expect with a recursive generator like this, but the results are quite nice. And, of course, it’s all in Processing, so you can see the code and edit it yourself, if you like.

https://bitbucket.org/BWerness/voxel-automata-terrain/









The Signal From Tölva - Automating a Pipeline

Unlike Big Robot’s last game (Sir, You Are Being Hunted), their new game The Signal From Tölva doesn’t procedurally generate its map. But their pipeline still used generative tools. You can see the result in the textures in the screenshots above.

Many games and films make extensive use of generative methods behind the scenes. Unless you know how something was made, you might not suspect it. Fortunately, Olly Skillman-Wilson has written a blog post explaining the details of the Tölva texturing pipeline.

http://www.big-robot.com/2017/01/24/art-signal-from-tolva-ian-mcque/