ProcJam Talks 2016

This year’s ProcJam talks are in advance of ProcJam itself. Specifically, they’re happening today. Eight talks by experts, some of whom I’ve talked about here: Gabriella Barros, Joris Dormans, Becky Lavender, Mark Nelson, Emily Short, Tanya Short, Adam Summerville, and Jamie Woodcock.

You can watch the talks live on YouTube, and after they’re done you’ll find the video at the same link.

There’s more information about ProcJam at the official site. I’m looking forward to seeing what people make this year!




PCG-Based Game Design Patterns

Games that use procedural content generation usually hide the generation from the players. Once the world is created you can’t interact with the generator.

In this research paper, a list of luminaries—Michael Cook, Mirjam Eladhari, Andy Nealen, Mike Treanor, Eddy Boxerman, Alex Jaffe, Paul Sottosanti, and Steve Swink—talk about designing games that are based on the users interacting with the generators.

(Speaking of Mike Cook, this is a good time to remind everyone that the talks for ProcJam are tomorrow.)

Some games let you interact with the generator at the start of the game—Civilization’s map settings, Minecraft’s seeds, or even the decks in Magic: The Gathering or the set of available cards in Dominion. Some even let you add new patterns, such as the editor for Race the Sun. But it’s rarer for games to let you interact with the generator while you play: games like Warning Forever and Galactic Arms Race are not the norm.

The taxonomies for interaction and content laid out in the paper are a good starting point for thinking about what you can do with a generator. Some of these are areas where I think there’s a lot of potential to innovate. I’m looking forward to seeing what narrative generators come out of this year’s NaNoGenMo, for example.

I also think that generative progression systems are underexplored, despite roguelikes and action roguelikes (such as Diablo) using them. There are a lot of possibilities to explore here, beyond loot and weapon generation.

Imagine a progression system generator that creates different kinds of gameplay based on your choices. Instead of hand-authoring branching content, the generator can create it on the fly. This shades into the level creation in Lenna’s Inception or Becky Lavender’s Zelda Dungeon Generator, which combine environment generation with an elaborate system to create the proper constraints for the player’s choices.

The design patterns themselves—the AI as creative proxy, the meta-environment, the player as a filter or fitness function, and games where the player can interact with the generative system itself—are all rich veins of potential ideas.

The paper wraps up with two game designs that use these ideas. Sliding Doors is a Telltale-style adventure game design that uses the user’s choices to prune the future story tree. Tombs of Tomeria is a dungeon exploration game where the player can influence and reshape the cellular automata that generate the map.
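The paper doesn’t include Tombs of Tomeria’s source, but the cellular-automata cave technique it builds on is a classic and fits in a few lines of Python. This is a generic sketch—the fill rate, rule threshold, and step count here are common defaults, not values from the paper:

```python
import random

def generate_cave(width, height, fill=0.45, steps=4, seed=1):
    """Classic cellular-automata cave generation: start with random walls,
    then repeatedly apply a smoothing rule (a cell becomes a wall when 5 or
    more cells in its 3x3 neighborhood, itself included, are walls).
    Out-of-bounds neighbors count as walls, which keeps the borders solid."""
    rng = random.Random(seed)
    grid = [[rng.random() < fill for _ in range(width)] for _ in range(height)]
    for _ in range(steps):
        new = [[False] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                walls = sum(
                    grid[ny][nx] if 0 <= ny < height and 0 <= nx < width else 1
                    for ny in range(y - 1, y + 2)
                    for nx in range(x - 1, x + 2)
                )
                new[y][x] = walls >= 5
            grid_row_done = True  # each row computed into `new`, not in place
        grid = new
    return grid

cave = generate_cave(20, 10)
```

A player-facing version of this—the kind of interaction the paper imagines—would let the player poke at the grid or the rule between smoothing steps, then watch the automata react.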

There’s lots of ideas to dig into here. I’m likely going to reference this a lot as I think about my upcoming projects. If you’d like to read the paper for yourself, you can find it here: https://arxiv.org/pdf/1610.03138v1.pdf




fyprocessing:

(via Skyline on Vimeo)

from Raven Kwok

Skyline is a code-based generative music video I directed and programmed for the track Skyline (itunes.apple.com/us/album/skyline-single/id1039135793) by Karma Fields (soundcloud.com/karmafields). The entire music video consists of multiple stages that are programmed and generated using Processing.

One of the core principles for generating the visual patterns in Skyline is Voronoi tessellation. This geometric model dates back to 1644 in René Descartes’s vortex theory of planetary motion, and has been widely used by computational artists, for example, Robert Hodgin (vimeo.com/207637), Frederik Vanhoutte (vimeo.com/86820638), Diana Lange (flickr.com/photos/dianalange/sets/72157629453008849/), Jon McCormack (jonmccormack.info/~jonmc/sa/artworks/voronoi-wall/), etc.

In Skyline’s systems, seeds for generating the diagram are sorted into various types of agents following certain behaviors and appearance transformations. They are driven by either the song’s audio spectrum with different customized layouts, or an animated sequence of the vocalist, collectively forming a complex and organic outcome.

Behind the Scenes on Skyline’s generative music video

A look at the process behind the processes of one of Raven Kwok’s generative music videos.

I’m fond of generative music visualization. It’s a kind of code-based synesthesia, reacting to the sound and translating it into another form.

Voronoi tessellation comes up a lot, too. It’s one of the basic building blocks that you can use to build more complex structures.
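Kwok’s Processing code isn’t shown, but the underlying idea is simple: every point in the plane belongs to whichever seed is closest. A minimal Python sketch of that nearest-seed assignment on a grid (the grid size and seed positions here are arbitrary, just for illustration):

```python
import math

def voronoi_cells(width, height, seeds):
    """Assign each grid cell to the index of its nearest seed point
    (Euclidean distance). The result is a discrete Voronoi diagram:
    cells[y][x] is the region that cell (x, y) belongs to."""
    cells = []
    for y in range(height):
        row = []
        for x in range(width):
            nearest = min(range(len(seeds)),
                          key=lambda i: math.dist((x, y), seeds[i]))
            row.append(nearest)
        cells.append(row)
    return cells

diagram = voronoi_cells(8, 8, [(1, 1), (6, 2), (3, 6)])
```

Animating the seeds—say, driving them with an audio spectrum, as the video does—makes the whole tessellation shift organically, since every region boundary depends on the seed positions.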






Visualizing Procgen Text

Emily Short, dame of interactive fiction, has been blogging about the design process behind The Mary Jane of Tomorrow, specifically the visualization of the procedurally-generated text.

While visualization is very useful in development, I’d argue that it is especially important for generative projects. If you’re trying to design a system that will surprise you, then being able to hold it all in your head and imagine all of the possible outcomes isn’t necessarily a benefit. Tests, visualization, and other tools can help build more complex systems.

In two articles, Emily Short looks at figuring out what to visualize and how to display it. It’s always useful to read someone else’s development experience, particularly when the writer is experienced at authoring interactive text. To pluck out one example, she talks about some kinds of additions that make the reader’s experience worse. That’s a lesson many of us had to learn the hard way.

She has another recent post that’s also worth mentioning, discussing how to deal with the bowls-of-oatmeal problem: connecting the generator to something mechanical, creating low-level, layered feedback, and using procedural text to respond expressively to the context and relationships where it’s used.

Most importantly, she points out that procedural generation “isn’t a substitute for designing content. It’s a way of designing content.” 

If there’s one idea that I can get across with this blog, it’s that one. Procedural generation gives us a new, expressive way to communicate ideas. It, as she says, demands that you be able to express your aesthetic goals in terms of the system.

The art tools of the future won’t be robotic replacements for artists. They will open up new ways to express ourselves and new mediums to create in. They will require collaboration with our machines while always striving to look beyond their naive limitations.




Quadrupet 

Made by lizzywanders and Ted Martens for the Pippin Barr “GAME IDEA” jam, Quadrupet is a fun little generator that creates zillions of cute critters. Press space when you see one that you like.

I like this little purple guy:

https://lizzywanders.itch.io/quadrupet








The Algorithmic Beauty of Plants

This one is for Video Game Foliage, who encouraged me to start this blog in the first place.

We’ve talked about L-Systems before: grammars of rules that can be used to, among other things, describe plants. In The Algorithmic Beauty of Plants, Przemyslaw Prusinkiewicz and Aristid Lindenmayer describe computer modeling of plants, with detailed looks at simulating different parts of plants. Lindenmayer systems feature heavily, of course.

The book is available online. Though it was originally written in 1990 and there are new techniques that have been developed since, it’s still a really good introductory text to the botany and simulation you’ll want to think about when making digital plants. 
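As a taste of what the book formalizes, here’s Lindenmayer’s original algae system—the two-rule grammar he used to model cell division—run through a generic rewriting loop. This is a minimal sketch, not code from the book:

```python
def lsystem(axiom, rules, steps):
    """Iteratively rewrite a string with an L-system: every symbol is
    replaced in parallel by its production rule; symbols without a rule
    are copied unchanged."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae model: A -> AB, B -> A
# A -> AB -> ABA -> ABAAB -> ABAABABA ...
result = lsystem("A", {"A": "AB", "B": "A"}, 4)  # "ABAABABA"
```

Rendering plants is then a matter of adding drawing symbols (the book uses turtle-graphics commands like `F`, `+`, `-`, `[`, `]`) and interpreting the final string as a sequence of pen movements.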




maxorder-flowy:

Beethoven’s Ode to Joy in Metal style by Artificial Intelligence and SONY CSL artist in residence Benoît Carré

On the other hand, you may prefer the Flow Machine’s metal version of the Ode to Joy…




maxorder-flowy:

AI composed music in the style of Michel Legrand

I rather like this machine-composed track by Benoît Carré and FlowComposer…




Mr Shadow: a song composed by Artificial Intelligence

Sony has been funding some interesting AI research into music composition, with Flow Machines.

I find it a bit less impressive than some other music composition approaches, since the software only handles the basic composition, with the arrangement and performance handled by the human musician. You’ll have to look elsewhere for full-stack machine music. Still, that means that it’s already well-suited to human-machine collaboration. These are the kinds of tools that will drive the future.

There is, of course, a Soundcloud account for the Flow Machines. And the research website has a ton of examples, including showing off its flexibility in variations, re-harmonization, and different instruments. (I like this one.)




How to Draw With Code

Casey Reas is one of the creators of Processing. In this video, he talks about using software as his artistic medium. 

He talks about creating images with processes, emergence, and his artistic process. He compares creating his software art to being a composer, writing a score for the machine to follow. He also talks about Processing, the programming language that has become a powerful generative tool.

(via https://www.youtube.com/watch?v=_8DMEHxOLQE)