BachBot

I’ve talked about procedurally generated Bach before, but this research project uses a Bach-based Turing test to train its LSTM.

After you take the test, there are a bunch of sample tracks to listen to, and the code is on GitHub if you want to generate your own Bach-style compositions.



A sideview of a procedurally-created urban plot

A top view of multiple procedural plots combined into an urban megaplot

Random Walk Through Raster (1975) by Frieder Nake, translated into Processing and 3D

Generative metacity with combined plots generated with Processing, from the Structures & Stratifications project

Structures & Stratifications

This 2015 project from a class at the Architectural Association Visiting School Shanghai uses Processing and Rhino/Grasshopper to algorithmically design urban plots. Taking inspiration from the pioneering computer art of Frieder Nake and Vera Molnár, the students individually developed generative plots, which were then combined into larger megaplots.

The results are visually striking, and there are several things of note here. Working algorithmically meant that the models were flexible and could be updated over time. Procedural generation has a lot of applicability for architecture: the flexibility speeds prototyping and makes it possible to consider more factors in the design. And the nod to early computer artists suggests that there’s a wealth of early work to draw inspiration from.

http://superarchitects.world/portfolio/structures-and-stratifications/




Towards The Automatic Optimisation Of Procedural Content Generators

A talk by Michael Cook, mostly about Danesh, but also about some of the reasons behind what Danesh is trying to do.

It’s not just for looking at the raw output of your generator. It also lets you track metrics for understanding the generator’s expressive range, perform randomized exploration of its parameters, and even search the parameter space for targeted metric values.
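Danesh itself is a Unity tool, but the expressive-range idea is easy to sketch in a few lines. Here’s a hedged illustration in Python, not Danesh’s actual API: the toy cave generator, the two metrics, and the `wall_chance` parameter are all invented for this example.

```python
import random

def generate_cave(wall_chance, size=16, seed=None):
    """Toy level generator: a grid of walls (#) and floors (.)."""
    rnd = random.Random(seed)
    return [["#" if rnd.random() < wall_chance else "."
             for _ in range(size)] for _ in range(size)]

def density(level):
    """Metric 1: fraction of wall tiles."""
    cells = [c for row in level for c in row]
    return cells.count("#") / len(cells)

def roughness(level):
    """Metric 2: fraction of horizontally adjacent tiles that differ."""
    pairs = [(row[i], row[i + 1]) for row in level for i in range(len(row) - 1)]
    return sum(a != b for a, b in pairs) / len(pairs)

# Expressive range: run the generator many times and see what region of
# metric space one parameter setting actually covers.
samples = [generate_cave(0.4, seed=s) for s in range(200)]
print("density range:", min(map(density, samples)), "to", max(map(density, samples)))

# Randomized parameter search toward a target metric, in the spirit of
# Danesh's automated exploration.
best = min((random.uniform(0.1, 0.9) for _ in range(50)),
           key=lambda p: abs(density(generate_cave(p, seed=0)) - 0.5))
print("wall_chance closest to 50% density:", round(best, 2))
```

The point is the shape of the workflow, not this particular generator: once outputs are reduced to metrics, exploring and tuning a generator becomes an ordinary search problem.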

I want to see more tools like this, and not just for straight procedural generation. It’d be really handy to have something like this next time I have to develop a new shader or material. Or test style transfer settings.

Danesh can be downloaded from its GitHub repository.

(via https://www.youtube.com/watch?v=bQuaOVTxoNA)




A 256-Character Program to Generate Poems (2008)

In 2008, Nick Montfort wrote a poem generator for the new year, in 256 characters of Perl code.

This is the entire source code of the original version of the generator:

perl -le'sub w{substr("cococacamamadebapabohamolaburatamihopodito",2*int(rand 21),2).substr("estsnslldsckregspsstedbsnelengkemsattewsntarshnknd",2*int(rand 25),2)}{$l=rand 9;print "\n\nthe ".w."\n";{print w." ".substr("atonof",rand 5,2)." ".w;redo if $l-->0;}redo;}'

It makes slightly more sense if you understand Perl, though there are some very clever optimizations going on here. Like the string “atonof”, in which every pair of adjacent letters forms a valid English word that can link the longer, slightly less coherent words generated with the w() function. 
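To see the trick in a more familiar language, here’s a rough Python transliteration of the generator’s core. The function names are mine, the letter-pair strings are copied from the Perl, and the control flow only approximates the original’s nested `redo` loops.

```python
import random

# The two letter-pair tables from the Perl one-liner: each call glues a
# random pair from each string into a four-letter nonsense word.
FIRST = "cococacamamadebapabohamolaburatamihopodito"          # 21 two-letter pairs
SECOND = "estsnslldsckregspsstedbsnelengkemsattewsntarshnknd"  # 25 two-letter pairs

def w():
    """Build a four-letter word from one pair of each table."""
    a = 2 * random.randrange(21)
    b = 2 * random.randrange(25)
    return FIRST[a:a + 2] + SECOND[b:b + 2]

def link():
    """Every adjacent pair of letters in 'atonof' is a word: at, to, on, no, of."""
    i = random.randrange(5)
    return "atonof"[i:i + 2]

# One stanza, loosely following the Perl's structure.
print("\nthe", w())
for _ in range(random.randrange(9) + 1):
    print(w(), link(), w())
```

Laid out this way, the 256-character budget is easier to appreciate: the whole vocabulary and grammar of the poems lives in three short strings.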

You can read more about how it works at Nick’s blogpost about it, including the connection to cut-ups and fragments. I found the discussion of the struggle to balance the quality of the output with the diversity to be very relatable. 

I rather like some of the words that come up when you smush two of the four-letter nonsense words together.

Nick would go on to make a series of 256-character poem generators. There’s an updated version of the first generator, plus six others.

A reminder for all of you aspiring generative artists out there: Generators don’t have to be big things. Sometimes they can be quite small.






A Long History of Generated Poetics

When we talk about the history of generative text, we often talk about cut-ups. But while Dada frequently gets referenced in connection with cut-ups, they’re only one small link in the chain. In this talk, everest pipkin takes a look at the prehistory of fragmented generative text.

The talk goes over Emily Dickinson’s use of scraps of paper for transcribing poetry; the 12th century Japanese Honkadori; the literary structure of pre-imperial Chinese court argument; and Melitzah, a medieval Hebrew literary device that reused fragments to create new meaning.

Drawing a parallel between that reuse and generative works’ dependence on their corpus, everest pipkin goes on to explore how the structure of the source text creates the generative voice, and how that plays into our roles as creators and readers:

So often when I talk about my work to writers unfamiliar with generative techniques, they joke that they will soon be out of a job. It is in jest, but it happens so often I can’t help but feel it comes from a place of genuine panic. I try to explain; sure, my tools can reproduce the right structures but they don’t really ‘get it’, meaningful output or not. Any moment of delight or clarity is on my end- the creative act here is in the reader.

When I say that the creative act is the reader’s, I imply the creator as well as the audience. When working with generative text, it is impossible not to read. One has to look for bodies of text that can function as useful sources for tools; big enough, or concrete enough, or with the right type of repetitive structure; learnable. And then one has to read the output of such machines, refining rules and structures to fix anything that breaks that aura of the space one is looking for. In this, we are not unlike the medieval scholar who studies holy verse to become fluent enough in that space that it becomes building block.

To ask if a machine could really understand the Torah, or for that matter- the importance of an Emily Dickinson recipe or a cutup newspaper poem or Wikipedia or emoji is not a question for now. That is, at this time, still our job; our machines stay tools like any other. What we are learning is what carries through, what we can teach our machines, and what lends them strength. The delight is all ours.

This gets at so much of what I’ve been trying to say here, over the past year and a half. I encourage you to go read the whole thing: https://medium.com/@everestpipkin/a-long-history-of-generated-poetics-cutups-from-dickinson-to-melitzah-fce498083233








CK2 Generator

Crusader Kings 2 is one of my favorite games, but despite its many excellent examples of interactive narrative, it doesn’t have a lot of procedurally generated content. Until this tool.

It’s not really a mod: it’s an entire Dwarf Fortress-style history and map generator that creates custom mods. 

Which is how I found myself leading the mighty Qisuigi Empire at the height of their struggles with the merchant republics, in Betahibr Kings II. (Renaming the Crusades is a nice touch.)

The generator doesn’t just create a new map and new kingdoms: it creates an entire history. Complete with family trees and dynastic intrigue for you to explore in the game.

You can, in the traditional CK2 style, choose to jump into this history where you like, as whichever character you want to play. It’s probably even more important than usual that you start with something manageable: the historical game at least has familiar geography to latch on to. 

Here, the generator creates entire cultures out of whole cloth, so it may take you some time to familiarize yourself with basic things like the ranks and titles in nearby counties. Pay close attention to the introduction at the start of the game, because it’s your chance to learn that the Orai is the one who can call for an Ahakaya.

The generator is still under active development, with additional features being added. It’s complete enough to start a new game with it right now, though.

Crusader Kings 2 was already an interesting simulation, but this plunges you right into your own private fantasy setting. After Dwarf Fortress, this is the easiest way to generate the plot and setting for your next fantasy novel series.

Now if you’ll excuse me, the fifth holy city of the Kahalier is still outside the realm, and I need to convince the Orai that now is the time to call for an Ahakaya…

http://ck2generator.com/




Roguelike Celebration

Last weekend, a roguelike event was held in San Francisco. A bit more player-focused than the International Roguelike Developers Conference, it nevertheless featured talks by a bunch of great designers. The recorded streams of the talks are available here and here.

Among others, there were talks by the original developers of Rogue; the Adams brothers; the developers of Brogue, Cogmind, Kingdom of Loathing, Caves of Qud, ADOM, and a whole lot more.

There’s about sixteen hours of footage there, and I haven’t watched it all, though from reports some of the talks might be interesting enough for me to do specific posts on them in the future.

And meanwhile, if you are trying to figure out if your game is a roguelike, Ben Porter has you covered.













Generating Videos with Scene Dynamics

What if a computer could predict the future? Show it a still photograph, and it will try to guess what follows. Admittedly, these hallucinatory videos show that the computer has a long way to go before it can produce realistic results…but it does kind of work.

The general idea behind this research is to use adversarial training to teach a neural net how a scene can change. There are two networks here: a generator network, trained on massive amounts of unlabeled video, that tries to assemble a plausible video, and a discriminator network that tries to guess which videos are fake.
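The two-network tug-of-war can be sketched on a toy problem. This is my own minimal illustration of adversarial training, not the authors’ video model: the “real” data here is just one-dimensional samples, the generator learns a single shift parameter, and the discriminator is a logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0          # generator parameter: shift applied to noise
w, b = 0.1, 0.0      # discriminator parameters (logistic regression)
lr = 0.02

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

for step in range(10000):
    real = rng.normal(4.0, 1.0)   # "real" data: samples from N(4, 1)
    fake = rng.normal(0.0, 1.0) + theta  # generator output

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator update: move theta so D(fake) climbs toward 1.
    d_fake = sigmoid(w * fake + b)
    theta += lr * (1 - d_fake) * w

print(f"learned shift: {theta:.2f}")  # drifts toward the real mean of 4
```

The video version replaces the scalar shift with a convolutional network producing frames, but the structure of the training loop, with each network’s loss defined by the other’s performance, is the same.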

That’s the basics. The research (by Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba) goes into a lot more detail, like exactly how they built their model, or how it works even better if you explicitly separate the stationary background. But the most significant part of the research is probably that they got these results using unlabeled video as the training source, enabling them to automate more effectively and to acquire massive amounts of input data via Flickr.

I admit, though, that I’m mostly posting these for the images themselves, rather than their future potential. No matter how the tech develops in the future, the transitional stage is interesting in itself: I don’t know of any other method that produces video quite like this. 

While an effective future-predicting algorithm has a lot of practical uses (such as frame interpolation for film editing) I’d be interested in seeing this hallucinatory style scaled up to production resolution.

Dreamlike imagery is quite hard to approximate: film VFX gets a lot of mileage out of fluid and smoke sims, and there are some morphing techniques, but you’d really have to go back to optical processes to find an effect like this. Tarkovsky would have found it useful for Solaris or Stalker.

http://web.mit.edu/vondrick/tinyvideo/




ProcJam 2016 - Art Packs

With ProcJam just around the corner, this year’s art packs have been released. Developed with funding from PROSECCO and Falmouth University, this year’s packs include hundreds of sprites by Tess Young and a huge collection of 3D items by Khalkeus. (Plus last year’s art pack by Marsh Davies.) They’re licensed CC BY-NC, so you can use them in non-ProcJam projects too.

I’m expecting lots of exciting projects this year. Go make something with them!




The Witch Who Came From Mars

AI-written, human-performed drama has been on the ascent, and the latest episode of the podcast Flash Forward adds another one to the list. Rose Eveleth asked Mike Rugnetta (host of Idea Channel) to help create a script via neural net.

What’s particularly interesting about this neural-network-authored drama is that they take some time to analyze it afterwards.

There’s an element of pareidolia in this, of course, as the listeners find meaning in something written by a neural network that has little context for what it was creating. But, like many creative uses of neural networks, it is good at picking up on patterns. Even if we can’t say that it understands them, it does recognize them. There is a structure there, the structure created by the unconscious patterns of the human writers it learned from.

I think intentionally exploiting pareidolia for creative inspiration is one of the practical artistic applications of generative text and poetry. That’s one use for random-thing-generators: generate a cult, a city, or a dungeon and then tell your own story or draw your own art based on it.

http://www.flashforwardpod.com/2016/09/05/episode-20-something-martian-witch-way-comes/