Joy Exhibition (2015)

An art gallery of paintings made with procedurally generated guns and visited by futuristic beings: it could only have been made by Strangethink.

Another entry in Strangethink’s neon-CGA exploration beyond the boundaries of conventional content creation, only this time you are the artist and the machine procedurally generates the critics. 

Inversion and revisiting are among the themes here: an art gallery, as in Secret Habitat, but this time you are the one who made the paintings. A landscape, but you stay indoors. The art critics resemble the Opposite Puritans and Nebulous Friends of Perfect Glowing Bodies, but injected into three-dimensional forms.

There are also approximately a zillion possible procedurally-generated paintbrush guns, each with a distinctive spray pattern. My favorite one so far is one I’ve dubbed “The Sushi Gun”.

The other theme is one of communication: your only means of interacting with these mute alien art critics is via your paintings.

http://strangethink.itch.io/joy-exhibition







Châteauxuaetâhc (ProcJam 2015)

Architecture has rules. Anything built with rules can be procedurally generated. For ProcJam 2015, Chris Welch did exactly that. 

The choice of French Baroque chateaux, with their modular construction and geometric gardens, worked out rather well. 

http://wanderingroad.itch.io/chteauxuaethc




Volumiscope (2015) 8k demo

I’m pretty sure I’ve written blog posts that have larger file sizes than this 8k demo.

KK/DMA took first place at Riverwash 2015 with this piece. I like the way it uses classical architecture as a basis–classical architecture has a lot of procedural rules, codified by Vitruvius and Alberti. While the historical buildings didn’t necessarily follow the theory precisely, having rules and procedures for creating something written down has kinship with the goals of procedural generation. Particularly when we think of generative systems as rhetoric, constructing meaning through construction.

Superimposed against this is an alien geometry, following very different rules. Its kaleidoscopic boolean geometry is very different from the humanistic architecture that surrounds it, but it is in motion, giving it a life that the stones lack. It reminds me of Roadside Picnic and La fièvre d'Urbicande in the way that a rigid, fluid alien intrusion is superimposed on a human-comprehensible (but no less mathematical or theory-bound) architecture.

(via https://www.youtube.com/watch?v=zAUvlJk5IxU)






X, a Game of Y Z (Procgen 2015)

Chess, over its long history, has been a game that is both intensely mathematical and subject to many variations. X, a Game of Y Z takes that to its logical conclusion.

Chess is a fertile field for game designers. Ever since it was invented in (most probably) India, Chess has been changing. The basic structure of a grid of squares with pieces that move in distinct patterns developed in many different ways. There’s the Chinese branch of the game, Xiangqi; Shogi; Janggi; Tamerlane Chess; Chu Shogi; Capablanca’s Chess; Jetan; Wildebeest Chess; and many more recent variations. Not to mention the variants that play with the board or other parts of the game, like Alice Chess or Kriegspiel Chess. There was even a Byzantine board variant that was structured as a loop, with the ends connected.

Fairy chess problems have a long tradition of using additional, invented pieces to complicate the problems. ChessVariants.com has a Piececlopedia of dozens of different pieces that have been used in Chess variants over the years.

Since fairy chess pieces mainly differ in the way that they move, X, a Game of Y Z uses this to create infinite variations of Chess, inventing new pieces for each game. Many of the pieces are very silly; others are mind-bendingly complicated, as you try to work out the implications of most of your pieces only being able to move diagonally.
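To make the idea concrete, here is a minimal sketch of one way to invent fairy chess pieces (my own illustration, not the game’s actual generator): define a piece by a random set of movement offsets, and optionally let it “ride” along a step like a rook or bishop does.

```python
import random

def generate_piece(rng):
    # Candidate single steps: every (dx, dy) offset within two squares.
    offsets = [(dx, dy) for dx in range(-2, 3) for dy in range(-2, 3)
               if (dx, dy) != (0, 0)]
    return {
        "moves": rng.sample(offsets, rng.randint(2, 6)),
        "rider": rng.random() < 0.5,  # riders repeat their step until blocked
    }

def destinations(piece, x, y, size=8):
    # All squares the piece could reach from (x, y) on an empty board.
    result = set()
    for dx, dy in piece["moves"]:
        nx, ny = x + dx, y + dy
        while 0 <= nx < size and 0 <= ny < size:
            result.add((nx, ny))
            if not piece["rider"]:
                break
            nx, ny = nx + dx, ny + dy
    return result

rng = random.Random(2015)
piece = generate_piece(rng)
print(piece, sorted(destinations(piece, 3, 3)))
```

Even this crude version produces the same spread of outcomes the game does: some pieces are near-useless, and some take real effort to reason about.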

The post-jam version of the game has a very basic AI, which won’t put up all that much of a fight but definitely knows how to use the Dapossu better than I do.

http://yanko.itch.io/x-a-game-of-y-z









Age of Empires II: The Age of Kings

One of the things that made the Age of Empires series stand out against the other RTS games of the era was the procedurally generated maps.

Where many of its contemporaries came with a fixed set of skirmish maps to memorize, Age of Empires had map generators. The first game, with its expansion, had eight map types, while Age of Kings came with over a dozen map generators. Two dozen with The Conquerors expansion.


Since the maps were random, you couldn’t memorize where things were, transforming the chess-like opening books of a typical RTS into something that required a bit more adaptation and attention as the players scouted the map to discover the basic shape of the terrain. Learning where the resources were became as important as observing what your opponents were up to.


Age of Kings also opened up the map scripting system to modders, letting anyone code up their own random maps in the somewhat idiosyncratic custom map scripting language. Ensemble Studios released their own series of custom map generators online for free, giving players new content in one of the early effective uses of the Internet to enhance a game. Combined with the map editor, this gave players a huge amount of new content and helped cement the game’s online community.

It’s worth considering releasing new generators as bonus content. While each individual generator probably took quite a bit of development time, the ratio of development time to player content was pretty favorable. Plus, the new maps could afford to be more experimental or unbalanced, since anyone who used them was already looking for something different.




NetHack 3.6.0

NetHack hadn’t released a new version for 12 years, since 3.4.3 came out in 2003. That changed yesterday.

NetHack 3.6.0 has finally been released by the enigmatic DevTeam.

NetHack is one of the big five roguelikes from the Usenet era. It’s still one of the more significant roguelikes, one of the ones that comes to mind whenever anyone mentions roguelikes. A descendant of Hack, NetHack piled on everything, including the kitchen sink, in a dense array of interactions that makes it one of the most idiosyncratic and expressive games around.

Yes, there is literally a kitchen sink in the game.

Version 3.6.0 is dedicated to Terry Pratchett. 






Sid Meier’s Civilization (1991)

The opening moves of a game of Civilization are a master class in making procedural generation matter to the player. While I love purely visual procedural effects too, it is much easier to make players care about procedural generation that has a direct effect on their interaction with the game. And with Civilization, every tile on the map has the potential to matter.

Later Civilization games would refine this formula and make the interface more accessible, but the basics were there right from the start. The beating heart of Civilization is the relationship between the cities and the tiles. Each tile has a measurable effect on the development of the nearby city, if one of the city’s workers uses it… but the city only has a limited number of citizens, particularly in the early game. Growing the best city means learning how to get the most out of each tile in the city’s radius. And, unless you’re playing on the Earth map, each and every one of those tiles was procedurally generated.
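The city/tile relationship can be sketched in a few lines. This is a toy model with made-up yield numbers, not Civilization’s actual values: each citizen works one tile in the city’s radius, so a well-grown city assigns citizens to the highest-yield tiles first.

```python
# Hypothetical (food, production) yields per terrain type.
TILE_YIELDS = {
    "grassland": (2, 1),
    "hills": (1, 2),
    "forest": (1, 2),
    "desert": (0, 1),
    "river": (3, 1),
}

def city_yield(radius_tiles, citizens):
    # Greedily assign citizens to the tiles with the best total yield;
    # a city can only work one tile per citizen.
    ranked = sorted(radius_tiles, key=lambda t: sum(TILE_YIELDS[t]),
                    reverse=True)
    worked = ranked[:citizens]
    food = sum(TILE_YIELDS[t][0] for t in worked)
    production = sum(TILE_YIELDS[t][1] for t in worked)
    return food, production

print(city_yield(["river", "desert", "grassland", "hills"], citizens=2))
```

Because every tile in the radius can change the outcome, every procedurally generated tile matters to the player from the first turn.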

Since the basic unit of both the generator and the gameplay is the tile, the expressivity of the underlying system is quickly absorbed by the player. It helps that geography is already familiar to many people, but no doubt these games are where many of us first learned about the strategic importance of isthmuses and how coastlines and continents can shape geopolitical choices. And it all goes back to the way the tile is the shared language between the player and the game. (A couple of decades later, Minecraft would demonstrate the use of a universal building block to unify its systems in a similar way.)


This is, I suspect, one reason why the late game in every Civilization game has generally been weaker than the start: once you have the whole map opened up in the modern era, individual cities and tiles don’t matter as much, and the game never quite develops another way for the game and the player to communicate. 

The Civilization games have attempted to address this in various ways. The first Civilization did so by introducing the alternate victory condition of a space race to Alpha Centauri, which was not only influential on other games, but also mitigates the late-game mop-up that typically plagues 4X games. 

As a bonus, the city view screens also have some randomized variation:

image
image












Elite: Dangerous - Stations

Something that’s a bit less noticeable than the procedurally generated planets is that Elite: Dangerous also has procedurally generated space stations. 

The big stations come in three or so varieties. The cuboctahedron-shaped Coriolis starports–which resemble the stations from the original Elite–tend to have very similar exteriors, except for the different lighting colors and the rare exterior arms.

More distinctive are the Orbis stations, cylindrical constructions arranged along a central axis. The stations are clearly built from modular parts and there’s a huge variety of different configurations. 

Unfortunately, most players aren’t going to notice this difference, because they mostly just interact with the business end of the stations, which pretty much all look exactly like this:

image

All that detail gets hidden because the major part the players interact with is mostly identical. Now, for all I know, there may be some variation that I’ve overlooked. (The motto painted above the docking slot does vary, for one.) But if there is, it doesn’t significantly affect the silhouette of the part of the station that the players spend the most time interacting with.

Human visual perception unconsciously responds differently to different cues. When we’re sensing differences or change, we look for color, motion, shape, and so on. In the case of these space stations, the shape of the docking area is similar enough that the different colors of lights don’t make a huge difference. Humans are relatively good at seeing familiar objects as the same under different lighting conditions, after all.

That’s one reason why silhouettes are important in animation and drawing: you want to ensure that a character and pose is recognizable in silhouette to make the visual perception work with you rather than against you.


This goes double for the interiors, which are nearly all very similar, I suspect because docking is such a big part of the game’s interaction that it’s tricky to introduce much variety without a lot of extra testing. As it stands, I’ve never seen any bugs in the core docking mechanic, and I imagine you’d need to put in a lot of work to guarantee that across the board with more varied interiors.


There is one kind of interior that is very different: some stations have pink-tinged luxury docking bays, complete with palm trees and garden skylights. There aren’t very many of them, but they do stand out and help give a bit more life to the galaxy.


The other major factor is that the different stations don’t have very many functional distinctions, or at least not ones that are often visually apparent. The game systems track which faction controls the station, what goods the station produces and buys, whether there’s a civil war or other crisis going on, and whether the station has a black market, but most of these factors don’t have an obvious visual cue. 

It is wrong to say that gameplay and systems are the only thing that matters in a game: after all, you can only experience the game systems through the mediation of the interface, narrative, and visual elements. But that doesn’t mean that the systems don’t matter. Here, the feel of the stations is very similar. While encountering the occasional palm-decked luxury station is special, the (thankfully fairly low friction) docking experiences are largely identical.


The smaller outpost stations also come in many modular configurations, though again the functional distinctions aren’t significant. Here, though, instead of too much order, the outposts have too little: with no clear silhouette, they come across as random conglomerations of parts, which players don’t have much incentive to explore. Too much chaos, not quite enough hidden detail.


I should note that Elite’s developers appear to be in this for sustainable development over the long haul, which I find an exciting approach to game development: gradual development aimed at fulfilling a vision for a game over time is, to my mind, a better model for both the developers, who get steady work on a good project, and the players, who get more investment in a game they already enjoy. 

So, while I have things to criticise about the state of the game’s procedurally generated stations, I’m not going to say that the team made the wrong call. Sometimes you have to make the hard decision to cut back on a feature. While I’d love it if this aspect of the procedural generation was more deeply integrated into the game’s interactions, it’s not important enough to hold back the entire game. 

That’s a message I’d like all of you aspiring procedural generation creators to internalize: sometimes, procedural generation isn’t the most important thing. I have no shortage of enthusiasm for procedural generation, but I also enjoy things that aren’t procedurally generated at all. I think that’s key for any artist who hopes to master a tool: sometimes the hardest part to learn is when not to use it.

Though I do hope they develop the stations further in future updates.




Procedurally generating a narrative in Forest of Sleep

As a rule of thumb, I don’t like to talk as much about unreleased projects, since there’s no shortage of released procedural generation projects and I don’t have as much to say when I can’t experience the work personally. (No Man’s Sky notwithstanding.) Plus, I prefer to take my own screenshots if possible.

The big exception is when people write about how they’re approaching procedural generation. Such as this article about narrative generation in the upcoming Forest of Sleep, from Ed Key (of Proteus) and Nicolai Troshinsky. What interests me here, as someone who has just spent much of November working on a novel generator, is the specific way they’re creating the context for their stories. 

They’re using visual elements to imply rather than explicitly state parts of the narrative. Taking advantage of lacunae to invoke the player’s pareidolia can sometimes be simpler with images and animation, since visual grammars are looser than the grammar of written language. 

But the key here seems to be reincorporation. The system can’t understand everything with the sophistication of a human storyteller, but if it remembers the elements it is capable of tracking and deliberately invokes them again, it can make the most of what it has. 
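Reincorporation is simple enough to sketch. This toy generator is my own illustration, not Forest of Sleep’s actual system, and all the element names are made up: it remembers every element it introduces and biases later events toward bringing those elements back rather than always inventing something new.

```python
import random

ELEMENTS = ["a wolf", "an old woman", "a silver ring", "a frozen lake"]

def tell_story(rng, length=6, reuse_chance=0.6):
    introduced = []  # everything the story has mentioned so far
    events = []
    for _ in range(length):
        if introduced and rng.random() < reuse_chance:
            subject = rng.choice(introduced)   # reincorporate an old element
            events.append(f"Once more, {subject} returns.")
        else:
            subject = rng.choice(ELEMENTS)
            introduced.append(subject)         # remember it for later
            events.append(f"{subject.capitalize()} appears.")
    return events

print("\n".join(tell_story(random.Random())))
```

Even with no understanding of what a wolf or a ring means, the repetition alone makes the output feel more like a story than a list of unrelated events.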

When I talk about how design approaches are sometimes more useful than AI approaches, this is the kind of thing I mean. You don’t need a magically intelligent storytelling AI to get a better result. While we’d all like better algorithms, to tell a procedurally generated story you really just need a smarter way of using the algorithms we already have. Many of the most successful NaNoGenMo entries have come from taking existing algorithms and either finding new ways to combine them or clever ways to justify their output and give them context and framing.

http://www.gamasutra.com/view/news/259455/Procedurally_generating_a_narrative_in_Forest_of_Sleep.php




Nick Montfort’s 1K Story Generators (2008)

Now that NaNoGenMo is done, let’s jump back in time a bit to talk about some pre-NaNoGenMo story generators. 

Nick Montfort has been talking about interactive fiction and creative computing for quite some time. (He’s currently an associate professor of digital media at MIT.) In 2008, he wrote a tiny little story generator that worked by elision: taking a list of sentences, removing some of them, and presenting the remaining ones. He followed it up with a couple of others that used similar processes.

Here’s one output from story3.py (with sentences written by Beth Cardier):

Here’s the story:
It’s Tommy’s birthday.
Mother steps away to answer the phone.
The dog leaps onto the counter.
Rufus grabs the prize.
Tommy chases Rufus.
A truck hums along.
Rufus wags his tail.
A lawnmower sputters.
Tommy runs outside holding Rufus.
A cloud darkens the sky.
The end.

I tend to think that elision (and addition, in this one) is a surprisingly effective way to take advantage of the human tendencies toward pareidolia and apophenia. It’s not computationally complicated (you could implement these generators with playing cards), but it doesn’t have to be.
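To show just how small the elision technique is, here’s my own reconstruction (not Montfort’s actual story3.py): keep a fixed, ordered list of sentences, drop a random subset, and present what remains in the original order. The sentences are taken from the story3.py output quoted above.

```python
import random

SENTENCES = [
    "It's Tommy's birthday.",
    "Mother steps away to answer the phone.",
    "The dog leaps onto the counter.",
    "Rufus grabs the prize.",
    "Tommy chases Rufus.",
    "Rufus wags his tail.",
    "Tommy runs outside holding Rufus.",
    "A cloud darkens the sky.",
]

def generate_story(rng, keep=5):
    # Choose which sentences survive, then restore their original order.
    kept = sorted(rng.sample(range(len(SENTENCES)), keep))
    return ["Here's the story:"] + [SENTENCES[i] for i in kept] + ["The end."]

print("\n".join(generate_story(random.Random())))
```

The reader does all the hard work: the gaps between the surviving sentences are what invite pareidolia.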

It also gives me an opportunity to talk about the distinction between one instance of the story, and all the possible stories that the generator can produce. I get one sense for the first story I read from one of these generators, and a somewhat different sense once I’ve read enough (or understood the source code) to have some idea about the entire story space. 

What it comes down to, I think, is whether you consider what a story says to be true for all of the stories, or only for the particular story that you are reading. Here’s another story from the same generator:

Here’s the story:
It’s Tommy’s birthday.
Tommy loves salmon.
Mother screams.
The sky is clear and blue.
Tommy cries.
Mother kicks Rufus.
A bicycle bell rings.
Tommy runs outside holding Rufus.
The newspaper slaps the front door.
The end.

This time, Rufus is innocent. (Indeed, we only know that Rufus is a dog if we read the first story.) Tommy’s life is a lot darker, here. 

So, are these separate stories, or two versions of the same story? Do we think of them as different views of the same events, or totally unconnected stories? Or both at the same time? Is Rufus always a dog? Are we reading the individual story, or the metastory that’s made up of all of the possible stories? I don’t know that there needs to be one set answer for these questions, but I also don’t have a good way to distinguish them yet. 

Rhizomic, labyrinthine works are intrinsic to the cybertextual form, but I don’t feel that the popular understanding of digital storytelling has fully grappled with them. I know I’d like to understand them better myself.

(If you’re interested in these questions, you might want to consider Sam Barlow’s Aisle as another example of a rhizomic story. Though there are many, many other precedents.)

The other reason I like these generators is that they’re simple enough that writers who don’t know how to program can still create their own. Take a stack of index cards, number them, and write a sentence on each one. Now shuffle the cards, remove a bunch, and put the remaining ones down in numerical order. Instant non-digital story generator, with lots of room to experiment.

If you’re interested in playing with procedurally generated storytelling, this is an excellent place to begin. Making one as an exercise will teach you a lot about the kinds of writing that work for procedural generation.

Meanwhile, I’ll leave you with these stories from story2.py:

Here’s the story:
The police officer nears the alleged perpetrator.
He hugs her.
Each one learns something.
The end.

Here’s the story:
The babysitter approaches the child.
She defies him.
They feel better after a good cry.
The end.

https://grandtextauto.soe.ucsc.edu/2008/11/30/three-1k-story-generators/