On Endings, Pacing, and Length (and No Man’s Sky)

Games don’t end when the game says “The End”. Games end when the player stops.

Granted, the two often happen at the same moment, but I’d venture that for the majority of games they don’t.

Sometimes players get bored or distracted and stop before the formal end. That doesn’t mean that it’s a bad game: I’ve never actually finished Skyrim’s main quest, but I’ve spent a lot of time in it. 

On the flipside, some games you finish playing and then immediately jump back in and play again. Competitive games, like Chess or Team Fortress 2, are obviously structured this way, but it can apply to any game you enjoy enough to play again. The existence of things like New Game Plus is merely a formal, mechanical recognition that we often play games we like multiple times.

In procedurally generated games “the end” is often even less meaningful than usual. (Minecraft literally has a place called “The End” which doesn’t actually end very much.) But that doesn’t mean they don’t have endings, it just means that the player decides what the ending is.


If this sounds like a harder thing to design, that’s because it is.

Alexis Kennedy’s discussion of endings is relevant here: how Fallen London handles them is instructive, particularly the problem of giving closure in a story game that has so far continued indefinitely.

Fixed dramatic forms have the advantage of spending a few thousand years learning how to generate catharsis and closure. There’s a lot we can borrow from them, particularly the less-fixed forms such as theater, as Kentucky Route Zero does so well. But there are still a lot of possibilities left to uncover.


No Man’s Sky, of course, has the center of the galaxy as its ostensible goal. But, like most procedurally generated gameworlds of indefinite size, the real constraints come from the player. No Man’s Sky is big, but it will only last as long as you keep wanting to play it.

No Man’s Sky doesn’t have 18-quintillion planets. It has however many planets you end up visiting before you decide to stop.

The other planets still matter, since the game itself will never tell you to stop. You can keep on going for the rest of your life, if you like. But, as Borges described, the effect of contemplating the Library of Babel is akin to meditating on infinity. Read the generated Library of Babel long enough, and you start to experience some of the emotions described in the short story.


Pacing

Pacing in film is dictated by the tempo of the shots and edits. Pacing in a game comes from the timing of interactions and the discovery of new content, which together set the heartbeat of the player’s engagement.

But how do you design the pacing for a game that’s longer than the player’s lifespan? 

Most current videogames consist of an emergent system with a progression on top of it, though other nestings of the two aspects are possible. SimCity is a good example: there’s the emergent simulation of the city, the progression of new things to build (gated by money or population), and the emergent problems that occur as your city expands and your original traffic plans no longer work in a larger metropolis.
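As a toy illustration of that nesting (my own sketch, nothing from SimCity’s actual code), the progression half can be as simple as a table of unlocks gated on a quantity the emergent half produces:

```python
# A toy progression layer gated on an emergent quantity, SimCity-style:
# buildings unlock as the simulated population crosses fixed thresholds.
UNLOCKS = [(0, "road"), (500, "school"), (5_000, "stadium"), (50_000, "airport")]

def available_buildings(population):
    """Everything the player is currently allowed to build."""
    return [name for threshold, name in UNLOCKS if population >= threshold]

print(available_buildings(6_200))  # ['road', 'school', 'stadium']
```

The emergent simulation pushes the population up and down; the progression layer just watches the number. That’s the sense in which one sits on top of the other.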

No Man’s Sky has the progression of unlocking new blueprints, the Atlas and getting to the center, getting a bigger ship, and so on, but that’s acting in parallel to the emergent discovery of the next new surprising planet. This makes designing the pacing a weird, dynamic problem, more so than for other kinds of games.

Pacing both halves of No Man’s Sky is tricky. I suspect that most players are going to stop when they feel like they’ve seen enough, rather than when they reach the end of the progression paths. That’s not much of a leap, though: for a given game, only a few players will ever complete it. The idea of beating a game may be embedded in gamer culture, but I believe it was always a myth, in both senses. Beating a game is a story we tell ourselves about the kind of gamer we think we’re supposed to be.


But it’s also because the two halves of No Man’s Sky are so disconnected. You won’t run out of planets, but you’ll start seeing already-known blueprints pretty quickly.

If there’s more variation in the buildings or items deeper into the game than I’ve yet gone, I suspect most people will stop long before they discover it. Pacing the static progression content is hard, because it’ll never match the procedural generation. Either you make it fast, to keep a reasonable rate of reward (and walking across a planet is already at a contemplative pace), or you make it much slower to stretch it out.


I wish they had included some parallel procedurally generated content systems, like the weapon generation in Borderlands or Galactic Arms Race. That would have given another dimension to the exploration: repetitious actions are more forgivable in the context of random rewards.

But of course that would make the overall balance even trickier than it is now, with some people having amazing games, others frustrating ones, and others boring ones, in unpredictable ways. Not to mention the extra development time, or the risk of procedurally generated loot spitting out dominant choices that eliminate whole swaths of gameplay. It’s not something you can just drop in on a weekend.


If nothing else, No Man’s Sky has given me a lot to contemplate. As Alexis Kennedy said, players want closure and continuity. How do you do that in a game that has no defined end? 

As for the pacing, try it as a design exercise: how would you deal with pacing the content? And do it with the same or fewer development resources: no magic content fairy for you.

Even for linear, narrative-progression-driven games, this complexity is worth keeping in mind. Every game is larger than its progression system.




No Man’s Sky

The game is finally out, so what do I think of it?


Unfortunately, I haven’t settled on an easy answer yet. 


This is partially because the game is really big. Maybe too big. I’ve barely scratched the surface (I’ve yet to find my first Atlas pass) and I’ve seen some crazy different planets…and some gameplay features that are the same across every planet. Will they vary more as I get deeper into the game? I can’t say yet. But I’m pretty sure that some of it won’t change.


For example, while the possible planetary biomes are quite varied, each planet is limited to a single biome. This was a deliberate design decision. The intent is that you’ll keep exploring:

“And we’ve built this whole huge universe, and that would be a shame! We want them to go out and explore. Or, for instance, each planet could contain loads of different biomes within it, we could have polar ice caps, and all that kind of thing, but then it wouldn’t make you want to go and visit other planets. So we don’t have that, and that’s really purposeful, and that’s our kind of vision for the game.”

I have to respect that, even if I’m not sure I agree with it.


Every planet I’ve found so far has had something unique about it. From my first, acid-rain-and-mushroom planet, to a snow world with red cliffs, to a planet where vast tables of rock float above an alien jungle, the visual variety has kept up.

This is a game that wants you to keep moving and seeing new things. The game is oriented around going forward. There are a few features, like the bonus for analyzing 100% of the creatures on a planet, that act as friction on that.


Sure, some of the planets do feel a bit similar. A lot similar, in the case of the more barren worlds. It’s much more pleasant to stroll through a lush paradise than to scrabble across empty rock gullies.

And, at least so far, the buildings and other places you visit are often visually identical. They’re certainly functionally identical, in that the points of interest fall into a few general categories and I’ve yet to find a huge emergent contrast that doesn’t derive from the planet’s security/conditions/wildlife settings.

I’ve also yet to encounter any vast plains or forbidding mountains. While there’s been a lot of variety so far, it has had some limits. Most places I’ve seen haven’t pushed the extremes as far as the generator is almost certainly capable of doing.

Though the planet with giant cubes of rock looming above snow-covered valleys was definitely memorable. 


And I’m going to have to go back and find it again if I want any pictures, because my screenshots from that session didn’t get saved and who knows when I’ll see a planet like that again.

(The game is definitely big. You’d think that a Douglas Adams quote would get the scale across, but that’s just peanuts to the math. Math can get really big. And procedural generation is one of the ways we have to sensorily experience mathematics.) 
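To put a rough number on the scale (a back-of-envelope estimate, nothing official):

```python
planets = 2 ** 64                  # ~1.8e19: the oft-quoted 18 quintillion
seconds_per_year = 60 * 60 * 24 * 365
print(planets / seconds_per_year)  # ~5.8e11 years to see them all
```

At one planet per second, that’s roughly forty times the current age of the universe.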


I have a lot more to say about No Man’s Sky later. Good and bad. I expected that it would be instructive, and it definitely lived up to my expectations on that count.

What I can say today is that it is not the pinnacle of procedural generation. It’s an example that’s going to be referenced repeatedly, especially if Hello Games continues to support and update it. But, inevitably, there is also a vast number of things that it’s not doing.

Sometimes that’s a deliberate design choice, like with the single-biome planets. Sometimes that’s because its focus is elsewhere–there’s no procedural plot generation, for example. (You’ll have to wait until November for the state of the art on that.) It doesn’t procedurally generate new gameplay. (You’ll want ANGELINA for that.) And sometimes No Man’s Sky tries but the implementation fails to live up to the rest of the game.


Still, I’m glad it got made. I think it closes out an era of discussing if an accessible, modern universe-exploration sim can be made and marks a transition to the dialogue about how best to make one. 

I can live with Noctislikes becoming a genre.




(via https://www.youtube.com/watch?v=Xx7-o869wek)

CPU Bach

Interactive generative music software probably isn’t the first thing that comes to mind when someone says “Sid Meier”, but it’s not as outré if you consider that he started making flight simulators and did pirate games, spy sims, and Railroad Tycoon before Civilization came along. Still, CPU Bach is probably the oddest entry in his oeuvre. Jeff Briggs, involved as always in MicroProse’s music, also worked on the design and programming.

The patented system for generating music based on the patterns in Bach’s music is certainly an interesting project. Since it was only released on the 3DO, it never had wide exposure. 

While limited by the technology available at the time–electronic music production has evolved past wavetable MIDI–the compositions themselves certainly sound Bach-like. And, based on the recordings I’ve heard, it does a good job on what I’ve always considered the hardest part of generative music: giving the composition an ending that sounds right.
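As a crude illustration of why endings are hard (a toy sketch of mine, nothing to do with CPU Bach’s patented system): a stream of random in-key notes never sounds finished unless you steer it into a cadence.

```python
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI note numbers

def phrase(length=8, seed=None):
    """A random walk in C major, forced to close on a V-I cadence."""
    rng = random.Random(seed)
    notes = [rng.choice(C_MAJOR) for _ in range(length - 2)]
    return notes + [67, 60]   # dominant (G), then tonic (C): a full close
```

Without those last two notes the phrase just stops; with them, it ends. Scaling that intuition up to an entire Bach-style composition is the hard part.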




Generating Fantasy Maps 

Martin O'Leary continues his explanation of Uncharted Atlas with a look behind the scenes of the map generation.

It’s inspired by Amit Patel’s polygonal map generation, but it has its own spin on what to do with the results, complete with interactive in-browser examples of how the algorithm works.
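The broad shape of the approach looks something like this (a heavily simplified sketch, not Martin’s actual code):

```python
import numpy as np

rng = np.random.default_rng(42)
pts = rng.random((4096, 2))            # scattered mesh points
height = np.zeros(len(pts))

for _ in range(8):                     # sum a few random cone primitives
    cx, cy = rng.random(2)
    height += 1.0 - np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)

height -= np.mean(height)              # pick a sea level...
land = pts[height > 0]                 # ...everything above it is land
```

The real generator then erodes the heightfield, traces coastlines and rivers, and places cities and labels, which is where most of the character comes from.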

I hope to see future NaNoGenMo and ProcJam projects inspired by this in turn. The source code is available on GitHub, if you’d like to get an under-the-hood look at how it does its thing, but Martin’s explanations are pretty comprehensive even without that.


http://mewo2.com/notes/terrain/






Smile Vector

If you thought that the ability to algorithmically change which way people are looking in photographs was a bit disturbing, you should know that’s not even close to the limit of possibilities.

@smilevector is a bot created by Tom White that uses a neural network to add or remove smiles from photos. This kind of neural puppetry, capable of generating faces along multiple vectors, is only going to become more powerful in the future.
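The usual recipe for this kind of edit (a general sketch; I don’t know the specifics of this bot’s implementation) is vector arithmetic in a learned latent space:

```python
import numpy as np

def edit_face(encoder, decoder, image, smile_direction, strength=1.0):
    """Encode a face, nudge it along a learned 'smile' axis, decode it back.
    encoder/decoder stand in for a pretrained model; smile_direction is,
    e.g., the mean embedding of smiling faces minus non-smiling ones."""
    z = encoder(image)
    return decoder(z + strength * np.asarray(smile_direction))
```

Negative strengths remove the smile, and other attribute directions (age, gaze, glasses) work the same way: that’s what generating faces along multiple vectors means in practice.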

Images are so central to our present culture that the idea of them being this easily manipulated is a bit off-putting. Not that photographs were true to begin with: the camera always lies. The image captured is not the thing itself, and context, framing, focal length, filters, and a host of other choices can alter the message before the photons hit the film, let alone the darkroom manipulations afterwards.

In a sense, image processing like this is more honest. In this age of Photoshop, images still cling to the illusion of truth. Maybe acknowledging how much of our recorded reality is a fiction is healthier than trying to pretend that the camera has access to Truth.

After all, there are already consumer cameras and apps that detect when the subjects are smiling. And it’s not hard to combine several shots where different people are smiling to get one smiling group shot. From there, it’s just a little ways across the line to have cameras that always take pictures of people smiling, no matter what their expression actually was.

Just imagine it: 21st century family photo albums, where everyone has a perfect smile in every picture.




Order, Chaos, and Scale in Elite Dangerous

The thing I like about the skybox in Elite Dangerous is that it’s created dynamically. Every point of light you can see is a star that you can visit. That scope requires the kind of flexibility that you can pretty much only get with procedural generation, even if Skyrim does try.

The same thing is true in the opposite direction, now that Horizons is out: look down at that rocky planet below, and every crater is a place you can visit up close.


Since that’s far too much data to store, this requires that the generator be predictable and deterministic, at least for the planet surfaces and star characteristics. (I have no idea if the Stellar Forge is deterministic, since they could get away with caching less than 0.01% of the systems, which is all players have explored so far.) 

Continuing one of my themes, this requires order.

The two Frontier games handled galaxy generation by using a bitmap of star density to define how many stars should be in each sector. Elite Dangerous uses a similar technique, extended into three dimensions. And instead of just tracking density, it also tracks some of the other characteristics of the stars, like metallicity. With a catalog of 160,000 known objects overlaid on top, there’s a heck of a lot of stars to explore.
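In sketch form, the trick looks something like this (my own reconstruction of the general technique, not Frontier’s code):

```python
import hashlib, random

def sector_stars(sx, sy, sz, density):
    """Deterministically generate one sector's stars from its coordinates.
    density(sx, sy, sz) is the expected star count at that position."""
    # Hash the coordinates into a seed: the same sector always produces
    # the same stars, so the galaxy never has to be stored.
    seed = int.from_bytes(
        hashlib.sha256(f"{sx},{sy},{sz}".encode()).digest()[:8], "big")
    rng = random.Random(seed)
    count = rng.randint(0, 2 * round(density(sx, sy, sz)))  # mean ≈ density
    return [(sx + rng.random(), sy + rng.random(), sz + rng.random(),
             rng.choice("OBAFGKM"))    # a position plus a spectral class
            for _ in range(count)]
```

A catalog of known stars then simply overrides the generated contents of whichever sectors the real objects fall into.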


It’s not perfect: astronomical measurement uncertainty has left some anomalies in the data. There are some dense lines of stars that are artifacts of deep sky scans (such as a dense line of stars near NGC 1333, via the 2MASS data). But in general, if you can see a star from Earth, you can visit it in the game, which is a really cool property all by itself.

The sheer scale of the generation introduces another imbalance, one that is less noticeable in smaller generators: there are so many stars that individual stars matter less and regions of stars matter more. At this scale, you need biomes and landmarks to make sense of things. 

Minecraft has the same issue if you travel far enough, but its block-scale interaction means that the basic language unit is granular enough that you won’t usually explore enough to see the scale at which the map blurs into noise. In contrast, in Elite it’s not hard to scroll the map to a region filled with unexplored, undifferentiated stars. And the basic unit of interaction for exploration, before planet-landing was introduced, was planetary bodies. 


With planet-sized interaction units, there isn’t as much scope for contrast to develop. There are patterns to learn, like how to find planets in the goldilocks zone, or which planets are worth more to scan, but there aren’t a lot of treasure maps for the stars themselves. As it turns out, what you really need are galaxy-scale biomes to give the terrain some meaning.

Unlike with some other useful concepts, this isn’t one that science fiction has done much imagining for us. Presumably, the real-world galaxy does have regional differences that have some importance, but since they don’t presently have much practical effect on us, there aren’t a ton of stories that explore that as an idea.


The landable planets themselves do a decent job at this: while there’s a fair share of pock-marked rocks, you do run across things like this Mars-ish planet, which has a recent, massive impact crater that affected most of one hemisphere.

[image: the Mars-ish planet with its massive impact crater]

And it is impressive to be able to glide down towards it, watching it gradually fill your field of vision.

Though features that big are hard to see when you land in the middle of them. 


Tooling around on the planet surface is solid, though I’m a terrible enough driver that I’ve learned to avoid tight crevices. (The SRV is rear-wheel drive, so reversing is sometimes enough to get it unstuck.) I really like the indirectness of the wave scanner. Indicators that give vague directions are a good way to interact with procedurally generated spaces: they keep you from getting utterly lost, but encourage you to pay attention to the space as you travel through it.

I haven’t noticed any significant regional gameplay differences, though. The inhabited bases are mathematical points lost in the geometric vastness. Presumably, different parts of a planet might have different resources, but in practice there’s mostly a different distribution of rocks. The scale works at the local level and the planet level, but the middle octave doesn’t have much differentiation. 


On the galactic scale, there are a few regional aspects. There’s the bubble of human-settled space, of course. Recent updates have added some mysterious artifacts of possibly alien origin, which seem to be in certain regions. I hope they add more things like that. But, more fundamentally, the critical thing for galactic exploration is fuel.

You can only harvest fuel from certain stars–mostly main sequence ones that aren’t too cold or hot. And the maximum distance you can jump between stars is limited by the characteristics of your ship. Therefore, any feature of the generator that affects distance or stellar composition will also directly affect an explorer’s experience of galactic topology. The gaps between the galactic arms and the vast regions of young stars become significant obstacles to be overcome.
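That turns exploration into a reachability problem. A sketch (star data simplified; the scoopable-class rule is the game’s “KGB FOAM” mnemonic):

```python
import math
from collections import deque

SCOOPABLE = set("KGBFOAM")  # the fuel-scoopable main-sequence classes

def reachable(stars, start, jump_range):
    """stars: {name: ((x, y, z), spectral_class)}. Breadth-first search
    outward from start, continuing only through stars that offer fuel."""
    seen, queue = {start}, deque([start])
    while queue:
        here = queue.popleft()
        if here != start and stars[here][1] not in SCOOPABLE:
            continue  # reachable, but a dead end: no fuel to jump onward
        for other, (pos, _) in stars.items():
            if other not in seen and math.dist(stars[here][0], pos) <= jump_range:
                seen.add(other)
                queue.append(other)
    return seen
```

Raise the jump range and the gaps between the arms stop being walls; lower it and whole regions drop off your personal map. The generator’s density field is, in effect, the explorer’s terrain.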

And, of course, the sky itself can change significantly if you travel far enough…


Which means that the galaxy does have a few landmarks: black holes, pulsars, the spiral arms, and especially nebulas. Real-world nebulas can be visible from hundreds of light years away, so naturally the in-game ones serve as useful reference markers.

Still, it’s something worth thinking about if your generator is creating a lot of space: are there enough different scales of variations going on? As the player zooms in and out of the map, are there points where it just looks like noise, or are there distinct structures at multiple scales? Does the player have a reason to care about what’s over the next hill, or the next hundred hills? Are there landmarks that the player can use to navigate? 




The Forever Game (and why there isn’t one)

When I talk about procedural generation with people, they often mention the idea of a game that you can play forever.

This is especially topical, since No Man’s Sky releases this week and the dream of The Last Game is written all over it. Any vast procedural system seems to accrue the idea that it can be played forever. I’ve heard it about Spore, NetHack, Elite, Minecraft…

This isn’t a new idea. Some of the earliest computer games were inspired by the idea of using the computer as a virtual dungeon master. The first D&D based dungeon crawl games were written within a year of the publication of those early faux woodgrain boxes. Roguelikes followed soon after: the idea of a game you can play indefinitely has a very strong pull.

My personal model of videogames is that they are a physical interface to a metaphysical, mathematical space. The rules are our bridge between the material bits that we directly interact with and the abstract ideas and systems they describe. But they’re also anchored by those rules; they can’t exceed them. Even in an infinite space, the rules will be smaller and more easily learnable. Humans are really good at pattern-matching.


Even with an infinite play space, the rules that you are learning are finite. Videogames are nested combinations of emergent systems and progression paths, and even complex systems can eventually be black boxed. You will learn how the generation works, and it will no longer surprise you.

Which is not to say that a virtual dungeon master is useless or impossible. As Warren Spector has pointed out, an AI that can help construct an experience would open up a lot of design space. And there’s a lot of research in that direction–it’s one reason why I’m always excited about NaNoGenMo.

But I’ve generated potentially infinitely long novels, and it’s not all that it’s cracked up to be. I could easily have made the output longer, but it wouldn’t have been any more interesting. Indeed, one of my goals for this year is to have just a couple of readable chapters.

Minecraft has the same number of possible seeds as No Man’s Sky has planets (2^64). No Man’s Sky has a larger expressive range, but Minecraft’s player constructions have a range that’s vastly larger still. That’s because there are generators that are larger than a human can explore: creativity and the universe itself.


It’s not the size of the generated space that counts. Minecraft remains popular because its procedural generation speaks via the same flexible blocks that the players can use to talk back, and the vibrant modding community is hard at work extending it. 

(That’s the same reason that I like games that have ongoing development: human developers adding features tends to be much more effective than even the best current algorithm.)

Someday, we may have a creative machine that can provide general-purpose new gameplay on the fly. But there’s a really easy way to get brand-new, completely different gameplay: play a different game. In terms of abstract cost-effectiveness, it’s likely that it’s always going to be cheaper to make two completely different games than it is to make one system that can surprise you equally well. 

And there are games that don’t need to go on forever. Kentucky Route Zero is my favorite game of all time, but I don’t talk about it much here because it doesn’t have a lot of procedural generation. It doesn’t need it. (If I were writing the interactive narrative blog I’ve threatened to start, half the posts would be about KRZ.)

Don’t get me wrong–procedural generation can be very exciting. But I value it for the way that its flexibility enables more robust systems and more fully-realized worlds. Or the way that it can surprise you if you let it have enough expressiveness. And even when I like it for its infinite reaches, I don’t think that it is ever going to result in the last game I ever need to play.







DeepWarp: Gaze Manipulation

Continuing our infinite series on things you can do with neural nets, DeepWarp is a research project that can change which way the eyes in a photograph are looking. This is more than just tweaking the iris and pupil: because it learned what moving eyes look like from being trained on photographs of real eyes, it also knows how to change the eyelids and the subtle muscle changes around the eyes. 

I’d like to see what this could do for game characters, of the old-school character-portrait-in-the-interface type, like Dungeon Master and Doom. Paint a bunch of character portraits, hook them up to a facial-expression generator, and have a wide variety of emoting to react to what happens in the game.
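Wiring that up could be as simple as a table from game events to expression parameters (entirely hypothetical; DeepWarp is a research demo, not a drop-in API):

```python
import random

# Hypothetical event-to-expression table; the warping model itself
# (something DeepWarp-like) would consume these parameters per frame.
EVENT_EXPRESSIONS = {
    "damage_taken": {"smile": -0.8, "gaze": (0, -10)},  # wince, look down
    "item_found":   {"smile": +0.9, "gaze": (0, 0)},    # grin at the player
    "enemy_near":   {"smile": -0.3, "gaze": (15, 0)},   # glance to the side
}

def portrait_params(event=None):
    """Expression parameters for the next portrait frame; idle eyes wander."""
    idle = {"smile": 0.0, "gaze": (random.uniform(-10, 10), 0)}
    return EVENT_EXPRESSIONS.get(event, idle)
```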

http://sites.skoltech.ru/compvision/projects/deepwarp/









Font Manifolds and Flexibility

One of the most useful aspects of procedural generation is flexibility.

Being able to output variations that smoothly transition between states and are generally coherent is often more useful than an infinite collection of more unpredictable stuff. Flexible generation gives the user more control. Further, if the user is another generator, it makes it easier for that generator to just request an output and expect a sensible result. 

For applications like procedural animation, this allows complex hand-authored state machines to be replaced by a well-trained neural network. For applications like this font generation, the aim is to allow end-users to tweak fonts without having to start from scratch with an expensive, complex font editor.

The basic principle behind this research by Neill D.F. Campbell and Jan Kautz is the mapping of fonts into a multidimensional manifold. Each point on the surface is a unique font, extrapolated from the example fonts. You can trace smooth transitions between different looks, or find the point that has the font that’s just a bit different in exactly the way you wanted it to be.
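The payoff of that structure is that “a bit different in exactly the way you wanted” becomes simple arithmetic (a minimal sketch, assuming a learned decoder that maps latent points back to glyph outlines):

```python
import numpy as np

def blend_fonts(z_a, z_b, t):
    """Interpolate between two font embeddings, 0 <= t <= 1. The learned
    decoder (not shown here) turns each point back into glyph outlines."""
    return (1 - t) * np.asarray(z_a) + t * np.asarray(z_b)

# Five fonts tracing a smooth path from font A to font B:
path = [blend_fonts([0.2, 0.9], [0.8, 0.1], t) for t in np.linspace(0, 1, 5)]
```

Vary t letter by letter across a word and you get the font-to-font transition trick mentioned below.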

If this technology goes mainstream, we’ll likely see a lot of bad, inconsistent fonts–but the rise of desktop publishing already let that cat out of the bag twenty years ago. As the technical barriers fall, the aesthetic training of the users becomes more important.

And it’s not like this is the first computational manipulation of fonts: it’s already vastly superior to the cheap way to make pseudo-italics. I expect that, if implemented appropriately, it will give professional font-makers welcome assistance in creating new weights or variations, provided that they’re able to tweak the results.

It also enables some tricks that were nearly impossible before, like transitioning a word from one font to another smoothly along its length. Which isn’t something I’d recommend as a regular thing, but plenty of art has come out of abusing systems that weren’t meant to be used that way.

http://vecg.cs.ucl.ac.uk/Projects/projects_fonts/projects_fonts.html




Reflectance Modeling by Neural Texture Synthesis 

It’s that time of year again: SIGGRAPH was last week, giving us a ton of new computer graphics research to be astounded by.

Like this research by Miika Aittala, Timo Aila, and Jaakko Lehtinen: using a single image, a neural network can use parametric texture synthesis to model the reflectance of textures.

If you’re not a computer graphics artist, you might be wondering what the big deal is, so here’s what’s going on:


Describing a material to be rendered involves both the color and the way that it reflects the light. There’s a bunch of ways to approach this, but one common way is to break the texture into its albedo, the basic diffuse color; specularity, the degree to which it reflects its surrounding environment; and glossiness, how mirror-like or scattered that reflection is.
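Those three components then feed the shading model at render time. As a minimal sketch (a common Blinn-Phong-style model, not the one from the paper):

```python
import numpy as np

def shade(albedo, specular, glossiness, n, l, v):
    """Shade one surface point. n, l, v: unit normal, light, and view
    vectors; albedo and specular: RGB triples; glossiness: 0 to 1."""
    h = (l + v) / np.linalg.norm(l + v)          # half vector
    diffuse = np.asarray(albedo) * max(np.dot(n, l), 0.0)
    exponent = 2.0 ** (1.0 + 10.0 * glossiness)  # higher = more mirror-like
    spec = np.asarray(specular) * max(np.dot(n, h), 0.0) ** exponent
    return diffuse + spec
```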

Typically, when you photograph something to use as a texture, you want the lighting to be as flat as possible. Any reflections or shadows won’t match the lighting you later add, so the photo needs to have as little of either as possible.

This is difficult, and the work it takes to get a clean diffuse map and the matching reflection data means that it’s often faster to paint the textures from scratch or procedurally generate them.

An image like this one would be terrible for the usual approach, because of the giant hotspot of a reflection in the middle:

[image: a source photograph with a large reflection hotspot in the middle]

But with this research, that actually helps. Feed this single image into a texture synthesis neural network, and it uses the difference in reflection to learn the optical properties of the material. So you can turn that into this:

[image: the result after reflectance modeling]

Which I think is both a clever idea and a pretty convincing result. I hope this is the kind of thing that will feature prominently in future artistic tools, doing the heavy lifting so we can focus on creating images.

https://mediatech.aalto.fi/publications/graphics/NeuralSVBRDF/