Generating Naming Languages

For NaNoGenMo 2016, Martin O’Leary created a fantasy map generator, complete with names in a generated language. Now, he explains how it works.

The basic idea is based on Mark Rosenfelder’s Language Construction Kit, though Martin’s current version is the result of further experimentation. It starts by generating some nonsense syllables constructed out of a set of vowels and consonants, and building up into phonotactics, introducing an orthography, and generating a morphology.
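To give a flavor of that first stage, here is a minimal Python sketch of generating nonsense syllables from a randomly chosen consonant/vowel inventory. This is not Martin’s code; the inventories, templates, and function names are illustrative assumptions, and his generator layers phonotactics, orthography, and morphology on top of something like this.

```python
import random

def make_language(seed=0):
    """Randomly pick a phoneme inventory and a syllable template.
    The specific choices here are placeholders, not the real generator's."""
    rng = random.Random(seed)
    return {
        "C": rng.sample("ptkbdgmnlrszh", 8),   # consonant inventory
        "V": rng.sample("aeiou", 3),           # vowel inventory
        "template": rng.choice(["CV", "CVC", "CVV"]),
        "rng": rng,
    }

def make_syllable(lang):
    # Each letter of the template draws from the matching phoneme set.
    return "".join(lang["rng"].choice(lang[slot]) for slot in lang["template"])

def make_word(lang, syllables=2):
    return "".join(make_syllable(lang) for _ in range(syllables))
```

Because every word draws from the same small inventory and template, the outputs sound like they belong to one language; reseeding gives a different language with the same structural properties.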

The result is a name generator that can create interesting, plausible names that sound similar enough to belong to the same language… and can then generate another language with the same properties that sounds completely different.

If you’re looking at generating a language, or even just learning how to think through how a generator works, this project is a great example.

The code is on GitHub, in JavaScript and Python.

http://mewo2.com/notes/naming-language/




The Procedural Content Generation Workshop @ FDG 2016 

This year’s Procedural Content Generation Workshop is going on right now. Well, now-ish. The Foundations of Digital Games conference is in Scotland, and they’re just finishing their coffee break as I type this. 

The papers are online, in case you want to follow along or read up on the talks you miss.

(via https://www.youtube.com/watch?v=V52B2sjtOsU)




Fractal Nature

Photogrammetry can measure point clouds from photographs, creating 3D data. Mandelbulbs are fractals in 3-dimensional space. 

In this short film, Julius Horsthuis combines the two, using boolean operations to merge height maps sampled from photogrammetry of Amsterdam with 3D fractals created in Mandelbulb 3D. They work well together, creating a dream-like recognizability and illustrating that combining unusual inputs can create something greater than either source on its own.

I especially like the wooden-looking statues, though there are a number of compelling images in the film where the aesthetics of the real city reach through into the mathematical realm.



orfs:

thestarrywisdom:

3dspacejesus:

xenoglyph:

mostlysignssomeportents:

The Lesser Bot is a twitterbot that is writing a machine-generated grimoire, complete with summoning runes, which is timely, given that we’re entering the age of demon-haunted computers.

“The Lesser Bot of Solomon offers you endless pages from a text in the style of Ars Goetia and the Pseudomonarchia Daemonum.”

№ 576 Chancellor AGMAHMAL, DEMON of:
‣ mordantly queued litheness
‣ subdivisions

№ 573 Senator SABAZMONUS, DEMON of:
‣ alcoholically positioning cutthroats

№ 571 Councilman BALZILSUM, DEMON of:
‣ presidential ascendancy
‣ whetstones
‣ preventive solos

№ 567 Saint BARETHOS, DEMON of:
‣ riposting footraces

№ 559 Deputy ABTZO, DEMON of:
‣ infected contaminators

№ 565 Saint ABAN, DEMON of:
‣ navigators

http://boingboing.net/2016/06/12/twitterbot-that-produces-endle.html

paging @kadrey

A grimoire for the information age, now ain’t that interesting.

@veequoi

@da-at-ass @wolvensnothere @hamfax probably everyone i know, really

@cowards-sorcery @rabbivole

The Lesser Bot

Multiple people have alerted me about The Lesser Bot of Solomon, a generated grimoire bot stylized after Ars Goetia and the Pseudomonarchia Daemonum.

image
image
image

As it turns out, the particular and specific nature of demons, like that of saints, is discoverable through generative processes.











Prisma

Style transfer has gone mainstream: Prisma is a mobile app that can transfer styles onto your photos. 

So, now that literal style-transfer Instagram filters are here, how do you feel about them? I’ve noticed a couple of amateur artists who were taken aback at the computer getting a better result than they thought they could draw themselves, while others have started experimenting rapidly.

Prisma did a very good job training their networks and picking good styles. Some combinations don’t quite work, but the fast turnaround time makes it one of the easier ways to experiment with style transfer.

image

Style transfer inherently has more variety than older filters since it reinterprets the image. A typical Instagram filter takes away information, hopefully focusing the composition. A style transfer can add information (from the source style) while also hopefully enhancing the interesting parts.

It’s not quite as flexible as I’d like for a professional tool, since the only choices are the style and the amount of blending. Don’t get me wrong: those are both really powerful levers. But I can immediately see possibilities in layering and combining multiple styles. 

I guess I’ll have to keep collecting my own toolset for now. Though I expect that someone will make a dedicated artist’s tool fairly soon. Or a Photoshop plugin.

image



JBrew’s Predictive Algorithm

This predictive text generator is an excellent example of what I like about the future of human-machine artistic collaboration. The way it works is Markov-chain-esque: it builds a weighted list of the words most likely to follow the previous set of words, picks the top candidates, and the human chooses from that subset.

Neither half of the partnership could come up with the result on its own. The human gets enough input to steer the result toward something comprehensible, while the machine dictates the constraints.
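That loop can be sketched in a few lines of Python. The function names are hypothetical, and the real project trains on specific corpora like car manuals rather than a toy string:

```python
import collections

def build_model(text, context_len=2):
    """Count which word follows each context_len-word window."""
    words = text.split()
    model = collections.defaultdict(collections.Counter)
    for i in range(len(words) - context_len):
        context = tuple(words[i:i + context_len])
        model[context][words[i + context_len]] += 1
    return model

def top_suggestions(model, context, k=3):
    """The predictive-keyboard step: offer the k most likely next
    words; the human picks one and the context window slides forward."""
    return [word for word, _ in model[tuple(context)].most_common(k)]
```

The machine never picks the word itself; it only narrows the choices, which is exactly where the human steering comes in.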

In addition to making the program, Jamie Brew has a blog with a ton of other outputs. Including dystopian car owner’s manuals:

image

And IMDB parental advisory information:

image

Jamie Brew is the head writer over at ClickHole, so this project is a pretty good example of human-machine collaboration on another level. The human, in this case, provides the sense of comedy. The machine provides the contextually-relevant chaos.

Each is doing the part that is hard for the other. Teaching the machine comedy is difficult, so the human does that part. And humans are really bad at being random and thinking outside their little boxes, so the machine handles that side. Creativity and coming up with new ideas involves learning new associations. Because the machine is outside our human neural architecture, it can free-associate ideas in ways that would never have occurred to us.

As it turns out, the comedy part involves directing the chaos into an ordered form. Which is interesting, because my naive thought would be to think of comedy as being inherently chaotic compared to the ordered seriousness of life’s drama. But maybe life is inherently chaotic, and by laughing at the chaos, comedy provides the order we need to deal with it.







The Worm Room 

The Triennale Game Collection is a set of interactive artworks for mobile devices. The Worm Room is everest pipkin’s contribution, a generative greenhouse that you can wander through endlessly.

image

It’s a quite lovely little place to visit.




deepjazz

A music post for your Friday: jazz compositions produced via deep learning. Created by Ji-Sung Kim in 36 hours at a hackathon, deepjazz uses a two-layer LSTM trained on MIDI files. It borrows preprocessing code from JazzML and uses Keras and Theano to run the machine learning on the GPU.

The Python code is on GitHub, under the Apache License 2.0.

(via deepjazz On Metheny … 64 Epochs)










Biomes

In biogeography, a biome is a set of regions that share similar plants, animals, and climate. Borrowing from this concept, Minecraft added biomes in Alpha 1.2.0, using them to control the variety in its terrain generation. Based on how the term is now used, I’d say that in procedural generation it has come to mean “a region of generated content that shares a common generator or generator parameters”.

Real-world biomes are classified by the intersection of precipitation and temperature, sometimes graphed visually on a Whittaker diagram. Minecraft’s biomes use similar parameters to try to keep similar biomes close to each other.
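That classification idea can be sketched as a toy Whittaker-style lookup in Python. The thresholds below are rough illustrations, not real ecological boundaries:

```python
def classify_biome(temperature_c, precipitation_cm):
    """Very rough Whittaker-style lookup: temperature and annual
    precipitation jointly select a biome. Thresholds are illustrative."""
    if temperature_c < -5:
        return "tundra"
    if precipitation_cm < 25:
        return "desert"
    if temperature_c > 20:
        return "tropical rainforest" if precipitation_cm > 250 else "savanna"
    return "temperate forest" if precipitation_cm > 100 else "grassland"
```

Feeding this with two smooth noise fields (one for temperature, one for precipitation) is a common way to get coherent biome regions out of a terrain generator.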

Aside from replicating real-world variety, this has two implications for the generation: 

First, it creates more variety. Having different rules in different places lets the results be wildly different, because the different regions don’t have to share the same procedures. A forest tree-placement algorithm can be different than a savanna tree-placement algorithm. 

Second, it gives a measure of consistency and structure. Deserts aren’t just different from forests, they’re different in predictable ways. That’s why, in Minecraft, if you need sand you know to look for a desert, not a jungle. Being able to learn patterns like that makes the world more understandable.

Minecraft isn’t the only game with biomes, of course. Dwarf Fortress includes biomes as a side effect of its obsessively exacting world generation. Terraria uses them as part of its progression mechanic, which is a good example of one way you can extend biomes further: by using them like levels in a metroidvania (or the way Lenna’s Inception uses its different regions as a difficulty progression).

Another way you can use biomes is as one layer in a multi-layered set of generators. One way to look at a world generator that uses biomes is as a biome-placement generator with specific generators nested inside it for each region type. That’s why biomes in procedural generation are more than just ecological biomes: you can use the same concept to describe a generator that creates a specific type of city neighborhood. Your city generator can have a downtown biome, a waterfront biome, and a historical district biome, each with its own buildings and behaviors.
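A sketch of that layering in Python, with made-up biome names and tile sets: the outer layer assigns a biome to each cell, and each biome’s own generator fills in the contents.

```python
import random

# Hypothetical per-biome generators: each callable fills one cell.
BIOME_GENERATORS = {
    "desert":   lambda rng: rng.choice(["sand", "sand", "sand", "cactus"]),
    "forest":   lambda rng: rng.choice(["grass", "tree", "tree"]),
    "downtown": lambda rng: rng.choice(["office", "shop", "plaza"]),
}

def generate_region(biome_map, seed=0):
    """Outer layer: a biome map names a biome for each cell.
    Inner layer: that biome's generator decides the cell contents."""
    rng = random.Random(seed)
    return [[BIOME_GENERATORS[biome](rng) for biome in row] for row in biome_map]
```

The nice property is that each inner generator only has to be good at one kind of place, and swapping one out never disturbs the others.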

Indeed, one of the first biomes added to Minecraft was the Nether. And while the Nether is a unique biosphere, it doesn’t exactly fit on traditional ecological charts.







Swiss Turbulence and Terrain Generation

Terrain generation is one of the oldest intersections of computers and procedural generation–Mandelbrot and his colleagues noticed that many fractals look very landscape-like. But while a basic random terrain is easy to generate, a good-looking terrain can take some effort.

Despite its ubiquity, raw Perlin noise doesn’t make for very convincing terrain: it’s easy to end up with a frequency of detail that looks wrong. So when Loren Schmidt linked to this article series by Giliam de Carpentier, it immediately caught my attention. Article 3 in particular discusses extensions to the terrain generation pipeline that add calculated erosion and mix noises to create a much more convincing heightfield.
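For context, here is the octave-summing (“fractional Brownian motion”) baseline those techniques improve on, sketched in Python with a simple hash-seeded value noise standing in for Perlin noise. This is the naive pipeline, not de Carpentier’s method; his articles reshape this kind of noise toward eroded forms.

```python
import math
import random

def _lattice(ix, iy, seed):
    # Deterministic pseudo-random value at an integer lattice point.
    return random.Random((ix * 73856093) ^ (iy * 19349663) ^ seed).random()

def _smooth(t):
    return t * t * (3 - 2 * t)  # smoothstep fade curve

def value_noise(x, y, seed=0):
    """Bilinearly interpolate random lattice values; returns [0, 1]."""
    x0, y0 = math.floor(x), math.floor(y)
    tx, ty = _smooth(x - x0), _smooth(y - y0)
    a, b = _lattice(x0, y0, seed), _lattice(x0 + 1, y0, seed)
    c, d = _lattice(x0, y0 + 1, seed), _lattice(x0 + 1, y0 + 1, seed)
    top = a * (1 - tx) + b * tx
    bottom = c * (1 - tx) + d * tx
    return top * (1 - ty) + bottom * ty

def fbm(x, y, octaves=5, lacunarity=2.0, gain=0.5):
    """Sum octaves of noise: each octave doubles the frequency and
    halves the amplitude, layering detail at multiple scales."""
    height, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        height += amp * value_noise(x * freq, y * freq)
        amp *= gain
        freq *= lacunarity
    return height
```

Every point on this terrain is statistically identical to every other, which is exactly the “frequency of detail that looks wrong” problem: real landscapes have ridges, valleys, and erosion patterns that plain octave-summing can’t produce.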

Giliam originally developed the algorithms for his thesis work and has released some source code, so if you’re looking to generate better terrain, go take a look.