Expressive Temperature
This post by Robin Sloan is interesting for three reasons.
First, it explains sampling temperature. This is a common term in machine learning: a low temperature produces the most typical, cohesive results, while a high temperature produces more novel results that are less likely to be cohesive.
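Here's a minimal sketch of what temperature does mechanically, assuming a model that outputs raw scores (logits) for each possible choice. The function name and setup are illustrative, not taken from Robin's post:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=np.random.default_rng()):
    """Sample an index from logits rescaled by temperature.

    Low temperature sharpens the distribution (typical, cohesive picks);
    high temperature flattens it (novel, less cohesive picks).
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)
```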
One of the points I’ve tried to make over the past year or so is that the distributions we use to generate things are a frequently overlooked way to get more interesting results. The same generator, with different temperature settings, can produce very different results. You can expand the expressive range of a generator by treating its properties as variables that can change over time and space.
Second, the post is about varying the temperature within a single generated artifact. We don’t have to use the same distribution for the entire thing! Consider a map generator for a fantasy RPG: maybe we set the temperature low in the middle of the map so the player starts out in familiar terrain, and increase it toward the edges, so the player encounters wilder results the further they travel from the start.
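A hedged sketch of that map idea, reusing the `sample_with_temperature` helper above: temperature grows with distance from the starting point, so tiles near the center are typical and tiles near the edges are wilder. The temperature range and the per-tile terrain logits are illustrative assumptions.

```python
def temperature_at(x, y, width, height, low=0.4, high=1.6):
    """Interpolate temperature from `low` at the map center to `high` at the corners."""
    cx, cy = width / 2, height / 2
    dist = np.hypot(x - cx, y - cy)
    max_dist = np.hypot(cx, cy)
    return low + (high - low) * (dist / max_dist)

def generate_map(width, height, terrain_logits):
    """Pick one terrain index per tile, sampling more wildly further from the center."""
    return [[sample_with_temperature(terrain_logits, temperature_at(x, y, width, height))
             for x in range(width)]
            for y in range(height)]
```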
Third, Robin uses the generated samples as part of a collaborative composition process, where both the computer and the human composer have a say. This kind of centaur workflow is often more interesting to me than fully-automated generators, because:
- This is a more practical use of procgen: humans have plenty of tasks that they could use a little assistance with, and relatively few that AI can handle on its own.
- It sidesteps the hard problems that we already have good human solutions for and lets us work on the problems that machines are better at. Humans are great aesthetic pattern matchers but bad at serendipity. Computers are the opposite.
- Constructing the interface language between the human user and the machine helps us think through what the generation problem actually is.
- And if we do want to fully automate it in the future, we have obvious places to start work.