Eyeo 2017 - Gene Kogan
How to not make money with machine learning

There is a lot of really exciting artistic work being done with neural networks. I’m more excited by the weird stuff people are making with them than by the supposed industrial applications. Gene Kogan is one of the people pushing these techniques into new areas, and this talk touches on a lot of the things he’s been working on.

He does a good job of both pushing the techniques past the basics and explaining how they work.

Watching this inspires me to get back to experimenting with deep dream artwork. (Though don’t expect anything anytime soon!) But if you’ve got something that you have been working on, I’d love to see it.




Let’s Program a Banjo Grammar

This is a fun video, not just for the banjo results but also for the thinking process behind them. Ryan Herr uses Tracery to operationalize the style of an expert banjo player. Along the way, he touches on context-free grammars, using computing to express knowledge, and making music with Tracery.
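To give a flavor of what a context-free grammar looks like in practice, here’s a minimal Tracery-style expansion sketch in Python. This isn’t Ryan Herr’s actual banjo grammar; the rules and phrases are placeholders I made up.

```python
import random

# A minimal sketch of Tracery-style context-free expansion.
# Each symbol maps to a list of possible expansions; "#symbol#" recurses.
grammar = {
    "origin": ["#roll# #roll# #lick#"],
    "roll": ["forward roll", "backward roll", "alternating thumb roll"],
    "lick": ["hammer-on on the third string", "slide into the fifth fret"],
}

def expand(symbol, grammar):
    """Pick a random rule for the symbol and expand any nested #symbols#."""
    text = random.choice(grammar[symbol])
    while "#" in text:
        start = text.index("#")
        end = text.index("#", start + 1)
        inner = text[start + 1:end]
        text = text[:start] + expand(inner, grammar) + text[end + 1:]
    return text

print(expand("origin", grammar))
```

The nice thing about this structure is that the knowledge lives in the data: an expert can keep adding rolls and licks without touching the expansion code.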




Gearhead Caramel’s Story Generator

I think the Gearhead series currently has one of the most complete story generators that almost nobody talks about. While there are many exciting new approaches being tried, Joseph Hewitt has had working procedurally generated plots in his games for well over a decade.

He’s recently written a blog post about the story generator in the Winter Mocha release of Gearhead Caramel. It’s a good summary of his approach and of how the current and previous Gearhead story systems work.

http://www.gearheadrpg.com/2018/02/26/winter-mocha-random-story-generator-propps-ratchet/



I started grad school, which is debatably something like being alive.

I’m researching procedural generation (of course), so there are quite a lot of new things I can talk about, provided I have the time to write about them.

There are also many lovely submissions and questions people have sent me that I’m slowly sorting through. Tumblr, annoyingly, doesn’t give me a way to schedule responses, so they’re tricky to fit into my schedule. Though, perhaps not coincidentally, one of the themes of my research has been that generative tools are one way to empower users to take control of their relationship with computers.

Along those lines, I’ve created a backup version of this blog on my own server, at http://procedural-generation.isaackarth.com. It uses a static site generator to create a flat HTML version of the blog. It’s not the most generative project, but I think it’s a small example of how having control over your own technical tools can empower you, and I’d like to help make that available for more people.




Flow fields on necessary-disorder.tumblr.com

I like this effect. A lot.

It’s also a good demonstration of how you can have a higher-order structured process driving an output. It’s obvious that there are rules governing how the particles move, but it isn’t a direct mapping like you see in a terrain heightfield that uses Perlin noise directly.
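For a rough idea of what’s going on under the hood of effects like this (my own sketch, not Etienne’s code): each particle samples an angle from a smooth field at its current position and steps in that direction. A simple trig function stands in here for the Perlin noise a real implementation would probably use.

```python
import math

# Sketch of a flow field: particles are steered by an angle field rather
# than mapping the field directly to geometry like a heightfield does.

def field_angle(x, y):
    # Smooth, spatially varying angle; swap in Perlin/simplex noise for
    # the organic look in the gifs. This trig combo is just a stand-in.
    return math.sin(x * 0.01) * math.pi + math.cos(y * 0.013) * math.pi

def step(particle, speed=2.0):
    x, y = particle
    a = field_angle(x, y)
    return (x + speed * math.cos(a), y + speed * math.sin(a))

# Trace a few particles for a handful of steps.
particles = [(i * 40.0, 100.0) for i in range(5)]
for _ in range(10):
    particles = [step(p) for p in particles]
print(particles)
```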

Etienne has a tutorial for how to make this effect, if you’re interested in building on it yourself.






Fantasy Maps for Fun and Glory

One of the nice things about writing about this stuff for a while is getting to check back in on projects and see how they’ve progressed.

Like this fantasy map generator, which I posted about nearly a year ago and which has made some big strides since. The creator has a blog discussing the ongoing development.

One of the things being added is the ability to edit the map elements, such as this river editor. Right now it’s a purely visual change, but I’m very interested in editors that loop back into the generator, so the generated content changes in response to user input. That kind of feedback is a really powerful way to collaborate with the machine.

As you can imagine, I’m planning on checking back in on this project again. I’m looking forward to finding out where it goes.

https://azgaar.wordpress.com/

http://bl.ocks.org/Azgaar/b845ce22ea68090d43a4ecfb914f51bd




A Pig, an Angel and a Cactus Walk Into a Blender: A Descriptive Approach to Visual Blending

OK, the first thing I noticed here was the pictures. It’s not every day you see a cactus pig.

This paper, by João M. Cunha, João Gonçalves, Pedro Martins, Penousal Machado, and Amílcar Cardoso, presents a visual blending system that takes a structured approach to blending little sketches together. Because it is aware of the structure and relationships of the component parts of a sketch, it can find analogies between different sketches.
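To make the idea of structure-aware analogy a bit more concrete, here’s a purely illustrative sketch of matching labeled parts by the relations they participate in. This is not the paper’s algorithm, and the part names are made up.

```python
# Toy illustration: two "sketches" as sets of labeled parts with relations.
# Parts that play the same structural role are candidates for swapping.
pig = {
    "body": {"supports": ["head", "legs"]},
    "head": {"attached_to": "body"},
    "legs": {"attached_to": "body"},
}
cactus = {
    "trunk": {"supports": ["arms"]},
    "arms": {"attached_to": "trunk"},
}

def analogies(a, b):
    """Match parts by comparing the kinds of relations they take part in."""
    pairs = []
    for part_a, rel_a in a.items():
        for part_b, rel_b in b.items():
            if set(rel_a.keys()) == set(rel_b.keys()):
                pairs.append((part_a, part_b))
    return pairs

print(analogies(pig, cactus))
# e.g. [('body', 'trunk'), ('head', 'arms'), ('legs', 'arms')]
```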

Generating new things by combining other things is a tried-and-true procgen method, but the novel thing here is that the computer is not only figuring out how to combine the parts, it is combining them in ways that feel fresh. The various pig-angels aren’t as obviously kitbashed as they would be if the parts had been modularized by hand; they have a cohesion. I’d be interested to see how widely this works with other sketched objects.

Not to mention that I can see this and related techniques being applied to other domains, like generating parts for a level designer to use.

https://arxiv.org/abs/1706.09076
https://arxiv.org/pdf/1706.09076.pdf




Level Generation in Ruggnar

The level generator that Cyrille Bonard implemented for the currently-in-development candle-powered platformer game Ruggnar was inspired by Spelunky but takes things in a different direction.

Of particular interest to me at the moment is that it uses a post-processing step to remove disconnected parts of the level. Spelunky’s generator doesn’t need this because it is specifically designed to never create a level that breaks the “golden path”. Making a generator that never produces a failed output can be a powerful design goal, but it isn’t the only approach.
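The general idea of that kind of post-processing step looks something like this (my own minimal sketch, not Ruggnar’s actual code): flood-fill outward from the entrance and seal off any open cells the fill never reaches.

```python
from collections import deque

# After generating a grid level, flood-fill from the entrance and carve
# away any open cells the player could never reach.
def remove_disconnected(grid, entrance):
    """grid: 2D list where '.' is open and '#' is solid; entrance: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    reachable = {entrance}
    queue = deque([entrance])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == "." and (nr, nc) not in reachable):
                reachable.add((nr, nc))
                queue.append((nr, nc))
    # Seal off every open cell the flood fill never touched.
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == "." and (r, c) not in reachable:
                grid[r][c] = "#"
    return grid

level = [list(row) for row in ["..#..",
                               "..#..",
                               "..#.."]]
for row in remove_disconnected(level, (0, 0)):
    print("".join(row))  # the unreachable right-hand area gets filled in
```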

Ruggnar’s post-processing steps allow it to make maps that are more geometrically complicated than Spelunky’s, including non-rectangular layouts. This also gives the earlier steps a wider range: the connections between level sections can be much more varied than Spelunky’s carefully designed match-ups.

Introducing a post-processing step can, if used right, enable a generator to do more than if it had to get everything right the first time.

https://ruggnar.com/ProcGen/




Generated Poetry: X except its Y

Poetry generated using X except its Y on permutations of the lyrics from the first 4 Nine Inch Nails albums.

It was part of enkiv2’s National Poetry Generation Month work for 2017. It uses word2vec to combine a source text with a trained style. Quite effective.
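I haven’t dug into the internals, but one way to get an effect in this general family is to train word2vec on a “style” corpus and then swap each word of the source text for its nearest neighbor in that embedding space. Here’s a rough sketch along those lines (not necessarily how X except its Y actually works), assuming gensim 4’s Word2Vec API and a tiny placeholder corpus:

```python
from gensim.models import Word2Vec  # assumes gensim 4.x

# Placeholder "style" corpus; a real run would use a full album's lyrics.
style_corpus = [
    "the machine hums in the dark".split(),
    "wires and rust and cold light".split(),
]
model = Word2Vec(style_corpus, vector_size=32, min_count=1, epochs=200)

def restyle(line):
    """Replace each word with its nearest neighbour in the style embedding."""
    out = []
    for word in line.split():
        if word in model.wv:
            out.append(model.wv.most_similar(word, topn=1)[0][0])
        else:
            out.append(word)  # keep words the model has never seen
    return " ".join(out)

print(restyle("the dark light hums"))
```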

Though I think the most striking imagery is a bit prone to plagiarism, as a side effect of how it rotates through the combinations: as you can see, some of the lines use the exact words of the original lyrics.

Plagiarism, in the generative sense, is what happens when an algorithm trained on input data, such as a Markov chain, outputs text verbatim from its source data.

Markov chains with too little data or too high an order are particularly prone to this problem. But other algorithms are as well; it’s especially tricky with many kinds of machine learning, and it’s one reason why it’s important to keep the training and validation sets separate from the test set.
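To make the Markov version of the problem concrete: with an order that’s high relative to the size of the corpus, most states have only one possible continuation, so the chain has little choice but to replay the source. A toy illustration (made-up corpus, not the actual lyrics):

```python
import random
from collections import defaultdict

def build_chain(words, order):
    """Map each tuple of `order` consecutive words to its possible successors."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])
    return chain

def generate(chain, order, length=12):
    state = random.choice(list(chain.keys()))
    out = list(state)
    while len(out) < length:
        options = chain.get(tuple(out[-order:]))
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the rain falls on the roof and the rain falls on the street".split()
# Low order: states recur, so the output recombines the source freely.
print(generate(build_chain(corpus, order=1), order=1))
# High order relative to the corpus: nearly every state has one successor,
# so the output is mostly verbatim source text.
print(generate(build_chain(corpus, order=3), order=3))
```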

These X except its Y results aren’t quite the same as generative plagiarism in that sense. Indeed, the effect of combining the two texts is part of the point, I think. But the concept comes up a lot and it’s worth critiquing results with it in mind.

Does the recurrence of a familiar line sufficiently counterbalance the way it shows the limitations of the generation? For me, I think it tends to highlight how the original uses repetition to create a resonance in its structure that the generated poetry is unaware of. But part of generative poetry’s draw is exactly how it can take the familiar and recontextualize it.

How do the dissonantly different word choices change the effect of the lyrics?

https://github.com/enkiv2/misc/tree/master/napogenmo2017




Manuel Barbadillo

This computer-generated art thing is new, but maybe not as new as you think.

Manuel Barbadillo was an artist from Spain who was influenced by reading Norbert Wiener’s work on cybernetics and applied those ideas to his art, which evolved into a modular system of shapes. He used the computer to generate combinations of these modules and search for ideas. Working from grids of asterisks that depicted the shapes the computer generated, he painted the final version by hand.

This kind of collaborative human-computer process is, I think, an important thing to keep in mind. Computers can only use metrics they can quantify, even if that quantification is too complex for humans to understand. So I think that human-centric art is always going to involve some degree of mixed-initiative human involvement. And I think building tools that make it easy for the human to converse with the machine about the design is one of the important challenges facing us today.

There are many pioneers of generative things. Some I know about, like Frieder Nake, Vera Molnár, and Lillian Schwartz, but I’m still finding lots more I haven’t heard about before.

http://dada.compart-bremen.de/item/agent/229

“My Way to Cybernetics”: http://www.atariarchives.org/artist/sec13.php