Procedurally generating a narrative in Forest of Sleep

As a rule of thumb, I prefer not to talk about unreleased projects, since there’s no shortage of released procedural generation projects and I don’t have as much to say when I can’t experience the work personally. (No Man’s Sky notwithstanding.) Plus, I prefer to take my own screenshots if possible.

The big exception is when people write about how they’re approaching procedural generation. Such as this article about the story generation in the upcoming Forest of Sleep, from Ed Key (of Proteus) and Nicolai Troshinsky. What interests me here, as someone who just spent much of November working on a novel generator, is the specific ways they’re creating the context for their stories.

They’re using visual elements to imply rather than explicitly state parts of the narrative. Taking advantage of lacunae to invoke the player’s pareidolia can sometimes be simpler with images and animation, since visual grammars are looser than the grammar of written language.

But the key here seems to be the reincorporation. The system can’t understand everything with the sophistication of a human storyteller, but if it can remember the elements it is capable of tracking and deliberately invoke them again, it can make the most of what it has.
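
To make that concrete, here’s a minimal sketch (in Python) of reincorporation as bookkeeping: the generator keeps a list of motifs it has already introduced and, with some probability, returns to one of them instead of inventing something new. The names and numbers are my own illustration of the general idea, not how Forest of Sleep actually works.

```python
import random

# Hypothetical sketch of reincorporation: track the elements the generator
# has already introduced and prefer returning to one of them over inventing
# something new. None of these names come from Forest of Sleep.

MOTIF_POOL = ["a wolf", "a lantern", "an old woman", "a frozen river", "a locked chest"]

class StoryState:
    def __init__(self, reincorporation_chance=0.6):
        self.seen = []  # motifs already used, in the order they appeared
        self.reincorporation_chance = reincorporation_chance

    def next_motif(self):
        fresh_options = [m for m in MOTIF_POOL if m not in self.seen]
        # Reuse an earlier motif if the dice say so, or if the pool is exhausted.
        if self.seen and (not fresh_options or random.random() < self.reincorporation_chance):
            return random.choice(self.seen), True
        fresh = random.choice(fresh_options)
        self.seen.append(fresh)
        return fresh, False

state = StoryState()
for beat in range(6):
    motif, reused = state.next_motif()
    verb = "returns to" if reused else "introduces"
    print(f"Beat {beat + 1}: the story {verb} {motif}.")
```

The sketch is trivial on purpose: the interesting design work is in deciding which elements are worth tracking and when a return feels meaningful rather than merely repetitive.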

When I talk about how design approaches are sometimes more useful than AI approaches, this is the kind of thing I mean. You don’t need a magically intelligent storytelling AI to get a better result. While we’d all like better algorithms, to tell a procedurally generated story you really just need a smarter way of using the algorithms we already have. Many of the most successful NaNoGenMo entries have come from taking existing algorithms and either finding new ways to combine them or finding clever ways to justify their output by giving it context and framing.
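
As a toy example of what context and framing can do for a simple algorithm: the fragment generator below is deliberately crude, but presenting its output as entries from a water-damaged journal invites the reader to fill in the gaps. This is a generic sketch of the framing trick, not a reconstruction of any particular NaNoGenMo entry.

```python
import random

# Toy illustration of framing: the underlying "algorithm" is a crude
# word-salad generator, but presenting its output as fragments of a
# water-damaged journal gives the reader a reason to fill in the gaps.
# A generic sketch, not based on any specific NaNoGenMo entry.

WORDS = "forest sleep lantern river bear snow hunger song door night".split()

def raw_fragment(length=5):
    # Stand-in for whatever existing algorithm you already have on hand.
    return " ".join(random.choice(WORDS) for _ in range(length))

def framed_entry(day):
    # The framing device, not the generator, does the narrative heavy lifting.
    return f"Day {day}. The ink has run here; all I can make out is: '{raw_fragment()}...'"

for day in sorted(random.sample(range(1, 40), 3)):
    print(framed_entry(day))
```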

http://www.gamasutra.com/view/news/259455/Procedurally_generating_a_narrative_in_Forest_of_Sleep.php