AI in theHunter

This article by Karin Skoog about the animal AI in theHunter has been going around the procedural generation community, and for good reason: this kind of design consideration is important for all kinds of emergent gameplay, and it’s very useful to see it discussed in the context of a released game that uses the AI as an essential part of the experience.

It’s also a good example of something I’ve been talking about a lot lately–the systems behind the generator have meaning and need structure. The AI systems covered in the article are pretty basic behavior trees…but they’re used as part of a deliberate design. Order gives meaning to the emergence. The difference between a forgettable generator and a memorable system is often just in the way it’s used and the ideas embedded in its structure.
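
The article is about design rather than code, but if you haven’t met a behavior tree before, it’s a small thing. Here’s a minimal sketch in Python–the deer and its priorities are my invention, not theHunter’s actual trees:

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2

class Selector:
    """Tries children in order until one succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, animal):
        for child in self.children:
            if child.tick(animal) == Status.SUCCESS:
                return Status.SUCCESS
        return Status.FAILURE

class Sequence:
    """Runs children in order, failing as soon as one fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, animal):
        for child in self.children:
            if child.tick(animal) == Status.FAILURE:
                return Status.FAILURE
        return Status.SUCCESS

class Condition:
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, animal):
        return Status.SUCCESS if self.predicate(animal) else Status.FAILURE

class Action:
    def __init__(self, effect):
        self.effect = effect
    def tick(self, animal):
        self.effect(animal)
        return Status.SUCCESS

# A hypothetical deer: flee if threatened, drink if thirsty, otherwise graze.
deer_tree = Selector(
    Sequence(Condition(lambda a: a["threat"]), Action(lambda a: print("flee!"))),
    Sequence(Condition(lambda a: a["thirst"] > 0.7), Action(lambda a: print("drink"))),
    Action(lambda a: print("graze")),
)

deer_tree.tick({"threat": False, "thirst": 0.9})  # prints "drink"
```

The structure is the whole point: the ordering of the branches is where the designer’s priorities live.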

As Kate Compton has said, “It’s design” – we’ve got all these fancy tools, but often the real problem is just making things with them.

Take terrain generation, for example: throwing some Perlin noise or a fractal together is fairly easy–I can teach a novice to do it in an afternoon. Having that terrain mean something to the people who interact with it is another step altogether. How you do that is intimately dependent on what you want to say.
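
To be concrete about how low that bar is, here’s essentially the entire afternoon lesson as a Python sketch, using the third-party noise package (all the parameters here are arbitrary):

```python
import numpy as np
from noise import pnoise2  # pip install noise

def heightmap(width, height, scale=64.0, octaves=4):
    """Terrain as a fractal sum of Perlin noise octaves."""
    terrain = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            terrain[y, x] = pnoise2(x / scale, y / scale, octaves=octaves)
    return terrain

# Crude ASCII relief map, just to see the shape of it.
chars = " .:-=+*#%@"
for row in heightmap(128, 64)[::4]:
    t = np.clip((row[::2] + 1) / 2, 0, 1)  # noise values are roughly in [-1, 1]
    print("".join(chars[int(v * (len(chars) - 1))] for v in t))
```

Everything hard comes after this: deciding what the hills are for.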

SimCity 2000 uses terrain generation to present its model of the importance of the physical space a city exists in: hills and rivers are important, demographics less so. Elite: Dangerous ties its terrain generation to a model of solar system formation. No Man’s Sky’s terrain expresses an aesthetic of pulp sci-fi novel covers. Minecraft uses biomes and blocks for its two-way conversation with the player.

Even with that diversity, most of those are still pretty straightforward. What about using generated space to express gameplay relationships? Or making terrain generation part of the gameplay? Or making a whole game about exploring the possibility space of the terrain generator?

One of the things that I’m very interested in is how we can use these tools to express ideas in ways that wouldn’t be possible otherwise.

http://www.karineskoog.com/thehunter-call-of-the-wild-designing-believable-simulated-animal-ai/




Lillian Schwartz

I came across this article about Lillian Schwartz recently, which gives me a great springboard to talk about how the history of these things goes back a lot further than we sometimes assume. She was making films out of computer-generated imagery in 1976, and before that she was making machine art.

Some of her projects would fit right in with a gallery of Processing projects, except she did it with machine language and FORTRAN.

There have been a lot of people experimenting with this stuff: Schwartz wasn’t the first. Some of them influenced other people who in turn influenced the works we see today, but the early history of art and computers is not very well known. Fortunately, there are people actively researching it–and some of the original artists are still around.




Nail Polish Bot

One thing that immediately sets Nail Polish Bot apart from other Twitter bots is that it uses 3D raytracing to generate its images. Which significantly ups the implementation complexity (it uses POV-Ray) but also expands the systemic complexity behind the image.

I’ve talked before about systems hidden behind surfaces, processes that generate outputs, and so on. I’ve been circling around a theory of, let’s call it, artistic information theory. It’s not as formal as the mathematical field, but it helps me think about generators.

We generally don’t know the inner workings of the generators other people make: the whole point is to look at the shadows they cast on the cave wall. But since the shadows repeat, we can start to detect patterns and infer what the machine looks like behind the screen. The generator’s surprise lasts until we figure out how the machine works.

A bot that generates images using a 3D raytracing process has a complicated machine to work with. In theory, a 2D process could generate amazing images: after all, a 3D renderer is just a really complex way of outputting 2D images. But when the generator is the one doing the bulk of the novel work, using a 3D raytracer gives it a lot more order and coherence.

Nail Polish Bot only changes the characteristics of the bottle. It could move the camera, change the scene, or adjust the lights, but it doesn’t. The fact that it could, and doesn’t, makes that a minor but deliberate artistic choice.

(I don’t bother about intentionality most of the time when I’m talking about artistic choices: even if something happened by accident, the artist chose not to correct it. Consciously or unconsciously, every piece of art reflects those who made it.)
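
I don’t know the bot’s internals beyond what’s in the repository linked below, but the general shape of the approach is easy to sketch: a fixed scene description where only one element is randomized, handed off to the POV-Ray command line. Everything here–the scene, the file names–is my own stand-in, not the bot’s actual code:

```python
import random
import subprocess

# Hypothetical stand-in for the bot's scene: only the pigment varies per run.
SCENE_TEMPLATE = """
camera {{ location <0, 2, -5> look_at <0, 1, 0> }}
light_source {{ <10, 10, -10> color rgb <1, 1, 1> }}
plane {{ y, 0 pigment {{ color rgb <0.9, 0.9, 0.9> }} }}
// The "bottle": a cylinder whose color is the only random element.
cylinder {{
    <0, 0, 0>, <0, 1.5, 0>, 0.5
    pigment {{ color rgb <{r:.3f}, {g:.3f}, {b:.3f}> }}
    finish {{ phong 0.9 reflection 0.1 }}
}}
"""

def render_bottle(path="bottle.pov"):
    with open(path, "w") as f:
        f.write(SCENE_TEMPLATE.format(r=random.random(),
                                      g=random.random(),
                                      b=random.random()))
    # Requires a local POV-Ray install; -D suppresses the preview window.
    subprocess.run(["povray", "+I" + path, "+Obottle.png",
                    "+W512", "+H512", "-D"], check=True)

render_bottle()
```

Everything the raytracer knows about light and glass comes along for free with each randomized render.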

Future versions might make a different choice–that’s one of the exciting things about bots: they’re performance art by machines. You never know what the artist-machine centaur will do.

https://twitter.com/nailpolishbot

https://github.com/quephird/nail-polish-bot




Meow Generator

Pretty soon the internet won’t need humans anymore.*

Alexia Jolicoeur-Martineau has been playing around with using generative adversarial networks to create cat pictures, and they’re pretty convincing. Those are definitely cats (and not Lovecraftian ones, unlike some previous examples).

The source code is available if you’d like to try generating your own, but the part I found interesting was the write-up that compared different approaches. DCGAN has nice results at 64x64, with other methods varying in quality.
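
For reference, the generator half of a 64x64 DCGAN is a short stack of transposed convolutions. A minimal PyTorch sketch of the standard architecture–not Jolicoeur-Martineau’s exact code; see the write-up for the real comparisons:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Standard DCGAN generator: 100-d noise vector -> 64x64 RGB image."""
    def __init__(self, nz=100, ngf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),      # 4x4
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False), # 8x8
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False), # 16x16
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),     # 32x32
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),           # 64x64
            nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

fake = Generator()(torch.randn(4, 100, 1, 1))  # untrained: noise, not cats yet
print(fake.shape)  # torch.Size([4, 3, 64, 64])
```

The adversarial training loop and the discriminator are where all the actual cats come from, of course.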

https://ajolicoeur.wordpress.com/cats/

*not even to generate yet another joke about cat pictures




Procedural Fireworks 2.0

I thought of an improvement to the fireworks that I made the other day, and I decided that the changes would be a good example of something I wanted to talk about: treating your generators as part of a larger process.

You see, the original fireworks were pure particles: each explosion, smoke puff, and trailing spark was a separate, simulated object. This kind of naive simulacrum is a pretty common approach, particularly when you’re feeling your way through the implementation and need more flexibility. But it isn’t always the best use of a generator.

In this case, I decided that all of those colored trails didn’t need to be individually tracked. They were adding hundreds of particles to each frame but not doing much. So I added an image buffer: just a hidden image, the same size as the canvas, that the falling sparks get drawn onto each frame.

Since the buffered trails no longer have individual lifespans, I had to manage the fade-out another way: at the start of every frame, we keep the image from last time but draw a transparent black rectangle over the whole thing. Old colors gradually fade.
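
The sketch linked below is in Processing, but the buffer-and-fade trick itself is portable. Here’s a minimal stand-in using Python and pygame, with the particle behavior drastically simplified:

```python
import random
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
trails = pygame.Surface((640, 480))   # the hidden image buffer
fade = pygame.Surface((640, 480))     # the "transparent black rectangle"
fade.fill((0, 0, 0))
fade.set_alpha(16)                    # low alpha = slow fade

# Simplified sparks: position and velocity; no per-particle lifespan needed.
sparks = [[320.0, 240.0, random.uniform(-3, 3), random.uniform(-6, 0)]
          for _ in range(200)]

clock = pygame.time.Clock()
for _ in range(300):                  # a few seconds of animation
    pygame.event.pump()
    trails.blit(fade, (0, 0))         # old colors gradually fade toward black
    for s in sparks:
        s[0] += s[2]; s[1] += s[3]; s[3] += 0.1   # drift and gravity
        pygame.draw.circle(trails, (255, 200, 80), (int(s[0]), int(s[1])), 2)
    screen.blit(trails, (0, 0))
    pygame.display.flip()
    clock.tick(60)
pygame.quit()
```

The key line is the fade blit: the buffer remembers everything and the rectangle forgets it for you, so no spark needs to track its own age.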

There are tradeoffs: the individual trails no longer have consistent color variation (though that could be re-implemented by pushing it up the hierarchy to the burst sparks). The lifespan fade-out is uniform, losing some of the subtle variation. There’s a usually-invisible after-image that doesn’t quite get cleaned up (though it easily could, with some more post-processing). The old particles no longer shrink (though I added a bright center to them, so the fade-out looks like they shrink a bit).

So it’s not an exact replica. It’s less flexible. But it looks almost the same and it’s faster. Now my computer can display many more fireworks without slowing down.

It’s better in some ways, worse in others. Every generator is going to involve trade-offs like this. And adding a new processing layer–the image buffer–opens up new doors for expanding the effects.

Which makes it a good example of one of the big points I’ve been trying to make: you don’t need to use the literal output of your algorithm.

Doing more processing on your output is a perfectly valid way to add more interest or find better performance–for example, by running the output of a neural net image stylization through an upscaler, so you only have to generate a quarter of the pixels that you would need to otherwise.
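
The simplest version of that pipeline, as a hedged Python sketch–assuming a hypothetical quarter-resolution output file and using plain Lanczos resampling from Pillow (a trained super-resolution model would preserve more detail):

```python
from PIL import Image

# Hypothetical quarter-resolution output from the stylizer.
small = Image.open("stylized_480x270.png")

# Doubling each dimension means only a quarter of the pixels were generated.
full = small.resize((small.width * 2, small.height * 2), Image.LANCZOS)
full.save("stylized_960x540.png")
```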

Generators can be literal, but they don’t have to be.

https://www.openprocessing.org/sketch/437851




Thanks for posting about my translation of Genesis into words starting with A! If you’d like to play with the word2vec-derived tech behind it, I have written a program that makes rhyming word pairs on any topic or combination of topics, and put it up on the web. Here’s the URL to try it out, if you want: http://73.172.60.168:8080/rhyme


Thanks for sending this in!
I always like to inspect
The foundation of the ideation.
Sometimes it’s not
The results you’d expect
From the visualization.
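
(I don’t know what’s inside the program above, but one plausible shape for a word2vec-plus-rhyme generator, sketched with gensim and the pronouncing library–the model choice, thresholds, and function names are all mine:

```python
import itertools

import gensim.downloader as api
import pronouncing  # CMU pronouncing dictionary wrapper

vectors = api.load("glove-wiki-gigaword-100")  # any word-vector model would do

def rhyming_pairs(topic, top_n=50):
    """Find on-topic word pairs that rhyme with each other."""
    related = [w for w, _ in vectors.most_similar(topic, topn=top_n)]
    related.append(topic)
    pairs = []
    for a, b in itertools.combinations(related, 2):
        if b in pronouncing.rhymes(a):  # dictionary-based rhyme check
            pairs.append((a, b))
    return pairs

print(rhyming_pairs("ocean"))
```

The word vectors supply the topic, the pronunciation dictionary supplies the rhyme, and the intersection is where the fun happens.)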




The WHIM project site appears to be down, leaving my blog post about it a little too high in the search rankings–I’d much rather the actual research project be there for people to find! The project ended in September of 2016, and it appears that the site didn’t outlive it by much.

I don’t have links to the published research at hand, though a quick search turned up several papers:

Llano, Maria Teresa, Rose Hepworth, Simon Colton, John Charnley, and Jeremy Gow. “Automating Fictional Ideation Using ConceptNet.” In Proceedings of the AISB14 Symposium on Computational Creativity. 2014.

Llano, Maria Teresa, Rose Hepworth, Simon Colton, Jeremy Gow, John William Charnley, Nada Lavrač, Martin Žnidaršič, Matic Perovšek, Mark Granroth-Wilding, and Stephen Clark. “Baseline Methods for Automated Fictional Ideation.” In ICCC, pp. 211-219. 2014.

Valitutti, Alessandro. “Creative Systems as Dynamical Systems.” In ICCBR (Workshops), pp. 146-150. 2015.

I’m sure that there’s lots more out there.

(I also came across this completely unrelated paper about generating poetry in Bengali, which appears to be interestingly different from English text generation:
Das, Amitava, and Björn Gambäck. “Poetic Machine: Computational Creativity for Automatic Poetry Generation in Bengali.” In ICCC, pp. 230-238. 2014.)

If this is the kind of thing that interests you, you may want to check out the work coming out of the International Conference on Computational Creativity. ICCC 2017 was just a few weeks ago.




Kirigami Beasts

People keep making amazing things. Tom Coxon is no exception: his latest project is collectable procedurally-generated papercraft monsters–find your own on the site.

There are 2^31 of them, so catching them all might take you a while. There’s also a Twitter feed, for your daily collectable pocket lifeform needs.

http://bytten-studio.com/kirigami/




Dinosaurs x Flowers

Chris Rodley’s dinosaurs-made-out-of-flowers have struck a chord with people.

My personal interest stems from the images being a good example of the next phase of creative neural network art: it’s not just style-transfer that imitates Monet or Mondrian, it’s introducing another layer of meaning while maintaining image coherence. That opens up a lot of options for creative photomontages.

Another interesting aspect: the artist didn’t need to touch the code. He used deepart.io to do the style transfer. He could have done it himself, of course–the cutting-edge style transfer code is still mostly in partially-documented GitHub repositories. But we’re far enough along that artists don’t need the cutting edge. The more mature tools are opening up the artistic options for people who are less technical but quite creative. I see that as a very positive sign.

https://chrisrodley.com/2017/06/19/dinosaur-flowers/




The PCG Assistant - Danesh 

Mike Cook’s procedural generation tool has been released!

It’s not a generator, it’s a tool for analyzing generators. Any generator: just set some annotations in the code, and Danesh can take the output and settings of a generator function and show you its expressive range. You can track how changes you make to the generator affect the metrics of the output, and collect a sample of the expressive range across a large randomized range of input parameters.

This kind of tool lets us move from feeling our way around individual settings for a generator to a high-level picture of the possible outputs. Often, the problem when making a new generator is maximizing the surprises while minimizing the broken outputs.
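
Danesh itself lives inside Unity, but the core idea–sample the generator a lot, compute metrics, and watch how they move as the parameters change–fits in a short Python sketch. The toy cave generator and both metrics here are mine, not Danesh’s:

```python
import random

def cave(width, height, fill):
    """Toy generator: each cell is solid with probability `fill`."""
    return [[random.random() < fill for _ in range(width)]
            for _ in range(height)]

def density(grid):
    """Metric 1: fraction of solid cells."""
    cells = [c for row in grid for c in row]
    return sum(cells) / len(cells)

def openness(grid):
    """Metric 2: fraction of open cells in the largest connected region."""
    h, w = len(grid), len(grid[0])
    seen, best = set(), 0
    for y in range(h):
        for x in range(w):
            if grid[y][x] or (x, y) in seen:
                continue
            stack, size = [(x, y)], 0
            seen.add((x, y))
            while stack:  # flood fill one open region
                cx, cy = stack.pop()
                size += 1
                for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                               (cx, cy + 1), (cx, cy - 1)):
                    if (0 <= nx < w and 0 <= ny < h
                            and not grid[ny][nx] and (nx, ny) not in seen):
                        seen.add((nx, ny))
                        stack.append((nx, ny))
            best = max(best, size)
    total_open = sum(1 for row in grid for c in row if not c)
    return best / total_open if total_open else 0.0

# Sweep the generator's one parameter; sample the metrics at each setting.
for fill in (0.2, 0.3, 0.4, 0.5, 0.6):
    samples = [cave(40, 40, fill) for _ in range(20)]
    avg = lambda f: sum(map(f, samples)) / len(samples)
    print(f"fill={fill:.1f}  density={avg(density):.2f}  "
          f"openness={avg(openness):.2f}")
```

Plot those numbers against each other and you have a crude expressive range map: the settings where openness collapses are where the broken outputs live.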

For live generators, where we’re sailing in unknown waters without a good map of the possible outcomes, we tend to stick close to shore–only implementing the algorithms that we know will work. Danesh and tools like it give us a map and a compass that we can use to plot our way across the ocean.

https://www.assetstore.unity3d.com/en/#!/content/90364