This 2015 project from a class at the Architectural Association Visiting School Shanghai uses Processing and Rhino/Grasshopper to algorithmically design urban plots. Taking inspiration from the pioneering computer art of Frieder Nake and Vera Molnár, the students individually developed generative plots which were then combined into larger megaplots.
The results are visually striking. There are several things of note here: working algorithmically meant that the models were flexible and could be updated over time. Procedural generation has a lot of applicability for architecture: the flexibility speeds prototyping and makes it possible to consider more factors in the design. And the inspiration from early computer artists suggests that there’s a wealth of early work to draw on.
Towards The Automatic Optimisation Of Procedural Content Generators
A talk by Michael Cook, mostly about Danesh, but also about some of the reasons behind what Danesh is trying to do.
It’s not just for looking at the simple output of your generator. It also lets you track metrics for understanding the expressive range of the generator, run randomized exploration of its parameters, and even have it search for parameter settings that hit targeted metrics.
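The targeted-metric idea can be sketched in a few lines. This is not Danesh’s actual code; the toy cave generator and the `openness` metric are invented here for illustration, standing in for whatever generator and metrics you’d plug in:

```python
import random

# Hypothetical stand-in for "your generator": a cave map that fills each
# tile with rock at probability `fill`.
def generate_cave(fill, size=20, rng=None):
    rng = rng or random
    return [[rng.random() < fill for _ in range(size)] for _ in range(size)]

# A metric over the output, like those Danesh tracks: fraction of open tiles.
def openness(cave):
    tiles = [t for row in cave for t in row]
    return 1 - sum(tiles) / len(tiles)

# Randomized parameter exploration: sample parameters at random and keep
# whichever setting gets the metric closest to a target value.
def search_for_metric(target, tries=200, seed=0):
    rng = random.Random(seed)
    best_fill, best_err = None, float("inf")
    for _ in range(tries):
        fill = rng.random()
        err = abs(openness(generate_cave(fill, rng=rng)) - target)
        if err < best_err:
            best_fill, best_err = fill, err
    return best_fill, best_err

best_fill, best_err = search_for_metric(target=0.6)
print(f"fill ≈ {best_fill:.2f} gives openness within {best_err:.3f} of target")
```

Swap in your own generator and metrics and the same loop gives you a crude automatic tuner; Danesh’s search is more sophisticated, but this is the shape of the idea.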
I want to see more tools like this, and not just for straight procedural generation. It’d be really handy to have something like this next time I have to develop a new shader or material. Or test style transfer settings.
It makes slightly more sense if you understand Perl, though there are some very clever optimizations going on here. Like the string “atonof”, in which every pair of adjacent letters forms a valid two-letter English word, letting it link the longer, slightly less coherent words generated with the w() function.
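You can verify that property of “atonof” mechanically. Here’s a quick check, using a small hand-picked list of common two-letter words rather than any official dictionary:

```python
# A small hand-picked list of two-letter English words (not exhaustive).
TWO_LETTER_WORDS = {"am", "an", "as", "at", "be", "by", "do", "go",
                    "if", "in", "is", "it", "no", "of", "on", "or",
                    "so", "to", "up", "we"}

def all_pairs_are_words(s):
    """True if every pair of adjacent letters in s is a two-letter word."""
    return all(s[i:i + 2] in TWO_LETTER_WORDS for i in range(len(s) - 1))

print(all_pairs_are_words("atonof"))  # pairs: at, to, on, no, of
```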
You can read more about how it works at Nick’s blogpost about it, including the connection to cut-ups and fragments. I found the discussion of the struggle to balance the quality of the output against its diversity very relatable.
I rather like some of the words that come up when you smush two of the four-letter nonsense words together:
The talk goes over Emily Dickinson’s use of scraps of paper for transcribing poetry; the 12th century Japanese Honkadori; the literary structure of pre-imperial Chinese court argument; and
Melitzah, a medieval Hebrew literary device that reused fragments to create new meaning.
Drawing a parallel between that reuse and generative works’ dependence on their corpus, everest pipkin goes on to explore how the structure of the source text creates the generative voice, and how that plays into our role as creators and readers:
So often when I talk about my work to writers unfamiliar with generative techniques, they joke that they will soon be out of a job. It is in jest, but it happens so often I can’t help but feel it comes from a place of genuine panic. I try to explain; sure, my tools can reproduce the right structures but they don’t really ‘get it’, meaningful output or not. Any moment of delight or clarity is on my end- the creative act here is in the reader.
When I say that the creative act is the reader’s, I imply the creator as well as the audience. When working with generative text, it is impossible not to read. One has to look for bodies of text that can function as useful sources for tools; big enough, or concrete enough, or with the right type of repetitive structure; learnable. And then one has to read the output of such machines, refining rules and structures to fix anything that breaks that aura of the space one is looking for. In this, we are not unlike the medieval scholar who studies holy verse to become fluent enough in that space that it becomes building block.
To ask if a machine could really understand the Torah, or for that matter- the importance of an Emily Dickinson recipe or a cutup newspaper poem or Wikipedia or emoji is not a question for now. That is, at this time, still our job; our machines stay tools like any other. What we are learning is what carries through, what we can teach our machines, and what lends them strength. The delight is all ours.
Crusader Kings 2 is one of my favorite games, but despite its many excellent examples of interactive narrative, it doesn’t have much procedurally generated content. Until this tool.
Which is how I found myself leading the mighty Qisuigi Empire at the height of their struggles with the merchant republics, in Betahibr Kings II. (Renaming the Crusades is a nice touch.)
The generator doesn’t just create a new map and new kingdoms: it creates an entire history. Complete with family trees and dynastic intrigue for you to explore in the game.
You can, in the traditional CK2 style, choose to jump into this history where you like, as whichever character you want to play. It’s probably even more important than usual that you start with something manageable: the historical game at least has familiar geography to latch on to.
Here, the generator creates entire cultures out of whole cloth, so it may take you some time to familiarize yourself with basic things like the ranks and titles in nearby counties. Pay close attention to the introduction at the start of the game, because it’s your chance to learn that the Orai is the one who can call for an Ahakaya.
The generator is still under active development, with additional features being added. It’s complete enough to start a new game with it right now, though.
Crusader Kings 2 was already an interesting simulation, but this plunges you right into your own private fantasy setting. After Dwarf Fortress, this is the easiest way to generate the plot and setting for your next fantasy novel series.
Now if you’ll excuse me, the fifth holy city of the Kahalier is still outside the realm, and I need to convince the Orai that now is the time to call for an Ahakaya…
Last weekend, a roguelike event was held in San Francisco. A bit more player-focused than the International Roguelike Developer’s Conference, it nevertheless featured talks by a bunch of great designers. The recorded streams of the talks are available here and here.
Among others, there were talks by the original developers of Rogue; the Adams brothers; the developers of Brogue, Cogmind, Kingdom of Loathing, Caves of Qud, ADOM, and a whole lot more.
There’s about sixteen hours of footage there, and I haven’t watched it all, though from reports some of the talks might be interesting enough for me to do a specific post on them in the future.
What if a computer could predict the future? Show it a still photograph, and it’ll try to guess what follows. Admittedly, these hallucinatory videos show that the computer has a long way to go before it can produce realistic results…but it does kind of work.
The general idea behind this research is to use adversarial training to teach a neural net how a scene can change. There are two networks here: a generator network that tries to assemble a plausible video, trained on massive amounts of unlabeled video, and a discriminator network that tries to guess which videos are fake.
That’s the basics. The research (by Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba) involves a lot more details, like exactly how they built their model, or how it works even better if you explicitly separate the stationary background. But the most significant part of the research is probably that they got these results by using unlabeled video as the training source, enabling them to automate more effectively and acquire massive amounts of input data via Flickr.
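The adversarial setup itself is simple enough to sketch in plain Python. This toy uses 1-D numbers instead of video, a linear “generator”, and a logistic “discriminator”; every number and architecture choice here is invented for illustration, not taken from the paper:

```python
import math
import random

random.seed(0)

def sigmoid(s):
    return 1 / (1 + math.exp(-s))

# Toy stand-ins for the paper's networks: real "videos" are just numbers
# near 4.0; the generator maps noise z to a*z + b; the discriminator is a
# logistic classifier sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = [random.gauss(4.0, 1.0) for _ in range(batch)]
    fake = [a * random.gauss(0, 1) + b for _ in range(batch)]

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    gw = gc = 0.0
    for x in real:
        e = sigmoid(w * x + c) - 1   # gradient of the loss for a real sample
        gw += e * x
        gc += e
    for x in fake:
        e = sigmoid(w * x + c)       # gradient of the loss for a fake sample
        gw += e * x
        gc += e
    w -= lr * gw / (2 * batch)
    c -= lr * gc / (2 * batch)

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    ga = gb = 0.0
    for _ in range(batch):
        z = random.gauss(0, 1)
        x = a * z + b
        e = (sigmoid(w * x + c) - 1) * w  # chain rule through d(g(z))
        ga += e * z
        gb += e
    a -= lr * ga / batch
    b -= lr * gb / batch

print(f"generator offset b = {b:.2f} (real data is centered at 4.0)")
```

The real system replaces these scalars with convolutional video networks (and, as noted above, a separate stationary background stream), but the tug-of-war between the two updates is the same.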
I admit, though, that I’m mostly posting these for the images themselves, rather than their future potential. No matter how the tech develops in the future, the transitional stage is interesting in itself: I don’t know of any other method that produces video quite like this.
While an effective future-predicting algorithm has a lot of practical uses (such as frame interpolation for film editing), I’d be interested in seeing this hallucinatory style scaled up to production resolution.
Dreamlike imagery is quite hard to approximate: film VFX gets a lot of mileage out of fluid and smoke sims, and there’s some morphing techniques, but you’d really have to go back to optical processes to find an effect like this. Tarkovsky would have found it useful for Solaris or Stalker.
With ProcJam just around the corner, this year’s art packs have been released. Developed with funding from PROSECCO and Falmouth University, this year’s packs include hundreds of sprites by Tess Young and a huge collection of 3D items by Khalkeus. (Plus last year’s art pack by Marsh Davies.) They’re licensed CC BY-NC, so you can use them in non-ProcJam projects too.
I’m expecting lots of exciting projects this year. Go make something with them!
AI-written, human-performed drama has been on the rise. The latest episode of the podcast Flash Forward adds another one to the list. Rose Eveleth asked Mike Rugnetta (host of Idea Channel) to help create a script via neural net.
What’s particularly interesting about this neural-network-authored drama is that they take some time to analyze it afterwards.
There’s an element of pareidolia in this, of course, as the listeners find meaning in something written by a neural network that has little context for what it was creating. But, like many creative uses of neural networks, it is good at picking up on patterns. Even if we can’t say that it understands them, it does recognize them. There is a structure there, the structure created by the unconscious patterns of the human writers it learned from.
I think intentionally exploiting pareidolia for creative inspiration is one of the practical artistic applications of generative text and poetry. That’s one use for random-thing-generators: generate a cult, a city, or a dungeon and then tell your own story or draw your own art based on it.