The What-If Machine

The WHIM research project aims to create software that can come up with ideas of the kind you might use to write a story or a joke. WHIM stands for “What-If Machine”.

You can play with Version 1 at the link below, and generate metaphors like “What if honored warriors were to be discharged from their armies, hatch plots and become contemptible traitors?” or plots such as “What if there was an engineer who woke up in a doghouse as a dog but could still fly?” or “What if there was an ox who lost her horn, which she used for music therapy, so decided instead to use a harp?”

But that’s only one small part of the project, which has spawned quite a lot of research. The project as a whole is part of computational creativity, which is about both trying to make software that can be creative like a human and exploring ways that software can help humans be creative.

http://www.whim-project.eu/




Miegakure

As far as I know Miegakure doesn’t use procedural generation for gameplay or level generation (not that the four-dimensional gameplay needs it) but it does use parametric models for a vital part of the game: the four-dimensional objects that intrude into 3D space are procedurally modeled because there are no 4D modeling apps. It’s much easier to build them parametrically.
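Miegakure’s actual tooling isn’t public, but the parametric idea is easy to sketch: define a 4D shape by a formula, then intersect it with a hyperplane w = w0 to get the 3D object that intrudes into the player’s space. Here’s a minimal Python sketch for a hypercube; the function names and the edge-slicing approach are my own illustration, not the game’s code.

```python
def tesseract_vertices(size):
    """Parametrically generate the 16 vertices of a 4D hypercube."""
    half = size / 2.0
    return [(x, y, z, w)
            for x in (-half, half)
            for y in (-half, half)
            for z in (-half, half)
            for w in (-half, half)]

def tesseract_edges(vertices):
    """Edges connect vertices that differ in exactly one coordinate."""
    return [(i, j)
            for i in range(len(vertices))
            for j in range(i + 1, len(vertices))
            if sum(a != b for a, b in zip(vertices[i], vertices[j])) == 1]

def slice_edges_at_w(vertices, edges, w0):
    """Intersect each 4D edge with the hyperplane w = w0: the 3D points
    where the shape 'intrudes' into the player's 3D space."""
    points = []
    for i, j in edges:
        a, b = vertices[i], vertices[j]
        if a[3] == b[3]:
            continue  # edge runs parallel to the slicing hyperplane
        if (a[3] - w0) * (b[3] - w0) > 0:
            continue  # both endpoints on the same side; no crossing
        t = (w0 - a[3]) / (b[3] - a[3])
        points.append(tuple(a[k] + t * (b[k] - a[k]) for k in range(3)))
    return points
```

Slicing a tesseract at w = 0 yields the eight corners of an ordinary cube; as w0 sweeps past the shape’s extent, the cross-section appears, changes, and vanishes, which is essentially the intrusion effect the game animates.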

(via https://www.youtube.com/watch?v=vZp0ETdD37E)




Build the Cities

Raven Kwok describes this music video as “a code-based generative music video” programmed in Processing for the track Build the Cities by Karma Fields featuring Kerli Kõiv.

Raven’s description:

The entire music video consists of multiple stages. The basic structure for each stage is a dynamic subdivided cubic cell, which is able to multiply based on a designated distribution pattern.

For generating the animated singing figure (Kerli), the pattern is computed based on the depth sequence of the original footage. Speaking of the footage, a Kinect, a camera, plus Depthkit (depthkit.tv/) were used to shoot both the RGB and depth footage simultaneously. Since I was not using Depthkit’s built-in visualizer, an additional program was later developed to post-sync the two pieces of footage based on the millisecond tags of the depth sequence.

For generating the cityscapes, I programmed another separate generator to produce images of random aerial views of buildings, using brightness to indicate each block’s altitude. The images were later imported and read by the system in a way similar to Kerli’s depth sequence. The mapping of the pattern is also affected by each host cubic cell’s “gravitation mode”, which changes the pattern’s facing direction.
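The brightness-to-altitude mapping Raven describes is straightforward to sketch. Assuming a grayscale aerial image already loaded as a 2D grid of 0–255 values (the actual image loading and Processing specifics are omitted, and the names here are hypothetical):

```python
def heightmap_from_brightness(pixels, max_altitude):
    """Map each pixel's brightness (0-255) to a block altitude.

    `pixels` is a 2D grid of grayscale values, as you might read
    from an aerial-view image; brighter pixels become taller blocks.
    """
    return [[round(value / 255.0 * max_altitude) for value in row]
            for row in pixels]

# A tiny 2x3 "aerial view": 0 = ground level, 255 = tallest building.
aerial = [[0, 128, 255],
          [64, 192, 255]]
blocks = heightmap_from_brightness(aerial, max_altitude=10)
```

The appeal of the approach is that any image editor becomes a level editor: paint bright strokes where you want towers, dark ones where you want streets.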




More on neural-doodle

Alex J. Champandard wrote a blog post that goes into more detail about neural-doodle, plus this video of it in action. 

If you want to try out the earlier DeepForger, you can send a tweet with an attached photo to @DeepForger.

http://nucl.ai/blog/neural-doodles/




How to Think About Bots

This botfesto is a statement by a group of bot creators about how we interact with bots. Not just amusing Twitter bots, but also the algorithms and programs that glue much of our present world together.

One aspect they talk about is that some bots may be more approachable if we don’t always see them as human. But even so, the semi-autonomous nature of bots raises ethical and design challenges.

Bots have been used to predict the spread of disease, make political points, amuse readers, perform journalism, keep an eye on Congress, and spread propaganda. Bots have bought drugs and accidentally made death threats.

There have been a lot of bloviating thinkpieces about the threat of future Artificial Intelligence, or the trolley-problem ethics of self-driving cars, mostly by people who apparently haven’t ever tried to build an AI or interact with a bot. Meanwhile, the actual threats and opportunities in our semi-autonomous software are overlooked. In contrast, “How to Think About Bots” takes a long look at the bots that are presently among us, some of which may have already shaped your life.

http://motherboard.vice.com/read/how-to-think-about-bots






neural-doodle: Semantic Style Transfer

I’ve talked about StyleNet and related neural network image generation before, but the level that Alex Champandard has pushed it to is mindblowing. Doodle a simple composition and the algorithm can turn it into a completely different style.

Right now, it’s only been tested on landscapes and portraits, but the potential just in that area is pretty immense. With a public release in the same week that AlphaGo won the first game in its match against Lee Sedol, computers seem to be finding new ways to replace humans.

So what does this mean for artists? Now that a Photoshop filter to create a Renoir landscape isn’t too far-fetched, is there still a place for human artists?

My answer, as always, is that of course there is. Photography didn’t remove human artists. New technology has always been incorporated into painting, from better chemicals, to reference tools, to new ways to think about the world, to reactions that look for the things that the technology can’t do.

But it will change art, especially the commercial production of art and the signaling bound up in fine art painting. Will we view Van Gogh the same way when the Van Gogh filter is all over Instagram? How will this affect the kinds of painting that people are talking about? And, most interesting to me, what new opportunities will this create for artists?


As with every new tool, only some artists will embrace this particular software.

With tools like this, the artist becomes Kasparov’s centaur, a melding of human and machine. Garry Kasparov has also referred to using a computer this way for chess as “Advanced Chess,” and others have called it “Formula 1 for Chess.” You’re still playing the game, but the rote work has been automated, freeing you to find your personal expression.

Using computational tools to make art often feels like the paint guns in Strangethink’s Joy Exhibition. It takes a combination of skill, perseverance, and luck to find the result you envisioned in your head. 

What this definitely won’t do, though, is eliminate the fundamentals. A discipline I learned a long time ago is that sketching out your ideas before you start playing with the machine can help you identify the weaknesses of the software and not be limited by them. It’s sometimes too easy to get stuck on a local peak, because the path led you there, not realizing that just beyond the fog is a mountain more beautiful and terrifying than you imagined you could imagine. A solid grounding in the basics can help you avoid getting stuck there.

On the flip side, if you’re working with software it’s often useful to learn to speak its language and learn what things it makes easy, and what it could do easily if you just have the right perspective.

I’m looking forward to what happens with neural-doodle in the future. It’s the exact kind of new tool that will inspire that one artist who is dazzled by its possibilities and uses it to make something new that the rest of us have yet to glimpse.

The source code for neural-doodle is on GitHub. You can also read the research paper, which has lots more pictures.






Creating FPS Open Worlds Using Procedural Techniques

Tom Betts’ GDC presentation about the map generation in Sir, You Are Being Hunted is now available for open viewing.

There’s quite a lot of technical and theoretical information included. Whether you’re looking for insights into the process of designing a complex procedural generator or for technical details (including some code), this is a talk that’s worth watching.

http://www.gdcvault.com/play/1020340/Creating-FPS-Open-Worlds-Using






Procedural Snake Eyes (Michael Cook)

Michael Cook takes a look at the way procedural generation interacts with other game systems, using Invisible, Inc. and XCOM 2 as a lens.

He touches on an important distinction that’s come up before–that procedural generation is not random–and then dives into a discussion about how both games handle unpredictability. Specifically, about how the game handles the extreme results, when the generator fails and produces something at the edges of the possibility space:

Why is this such a big deal? Because sooner or later, your content generator is going to make a mistake, and it’s going to make the player feel like you’re not playing fairly. The role of a level generator in a strategy game like this is almost like a DM in a pen and paper game: it’s setting up a scenario, and asking if you can solve it, and the implication here is that a solution exists because the DM is really there to make sure you’re having a good time. When a generator messes up, it feels like you’re being treated like a fool – the DM is either stupid or vindictive, throwing down eight dragons and smirking at you when you protest.

How the rest of the game handles this determines the frustration the player experiences when the generator throws something more difficult than expected at them. Michael references the way that Spelunky plans for the worst result and gives the player enough bombs so that a total failure of the generator is still a fun challenge. 

I’d also throw in Crypt of the NecroDancer and the way that its digging mechanics soften the excesses of the generator, turning how you alter levels into its own kind of side puzzle.
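Spelunky’s safeguard can be caricatured as a reject-and-retry loop: generate a level, check the worst case against the player’s guaranteed resources, and try again on failure. Everything below (the function names, the `walls_blocking_exit` measure) is a hypothetical illustration of that pattern, not Spelunky’s actual algorithm.

```python
import random

def generate_level(rng):
    """Stand-in generator: returns how many impassable walls sit
    between the entrance and the exit (a deliberately crude model)."""
    return {"walls_blocking_exit": rng.randint(0, 6)}

def is_beatable(level, starting_bombs):
    """Worst case: even if every blocking wall must be bombed,
    the player's guaranteed starting bombs are enough."""
    return level["walls_blocking_exit"] <= starting_bombs

def generate_beatable_level(seed, starting_bombs=4, max_attempts=100):
    """Reject-and-retry: keep generating until the worst-case
    traversal fits within the player's guaranteed resources."""
    rng = random.Random(seed)
    for _ in range(max_attempts):
        level = generate_level(rng)
        if is_beatable(level, starting_bombs):
            return level
    raise RuntimeError("generator kept producing unbeatable levels")
```

The interesting design choice is where the guarantee lives: in the generator (reject bad levels), in the player’s loadout (enough bombs to brute-force any failure), or in the mechanics (Necrodancer’s digging). Each spreads the cost of generator mistakes differently.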

He also mentions the problem that testing levels by hand can only ever cover a fraction of the possible results. I’m looking forward to more information about the analysis tools that he’s working on.

http://www.rogueprocess.run/2016/03/03/procedural-snake-eyes/




The 2016 7-Day Roguelike Challenge

The 2016 7-Day Roguelike Challenge is starting today and running until the 13th. What roguelike are you making for it? Spotted any projects that look fun?

http://7drl.org/




Into the Black: On Videogame Exploration

While this isn’t strictly about procedural generation, I think the general points about exploration and videogames are applicable.

Not just in describing a way to think about exploring the procedurally generated spaces we’re creating–do we need to add collectables, or is the journey itself enough?–but also in our exploration of procedural generation itself. 

Does procgen always need a goal? Does every procgen project need to be a game, or have some practical result? I think I’ve made it clear over the past year that I think the answer is no–sometimes procgen is interesting in its own right. While I’m all in favor of combining projects into more complex systems, Tamperdrome Collection, Mansion Maniac, and Ordovician are complete in themselves and don’t need anything as tawdry as a goal.

It’s also why, while an infinite procedurally generated storytelling system that can tell any story would be fascinating, I don’t think it’s the holy grail of procgen (or videogames, for that matter). It’s another dimension of the seduction of infinity: sometimes it’s better to be about something, rather than to be about everything. Your procgen project doesn’t have to be all things to all people, and you can procedurally generate a story without needing to procedurally generate all stories.

(This is also an excuse for me to mention Bernband, which has no procedural generation but is nonetheless delightfully atmospheric.)

(via https://www.youtube.com/watch?v=TGxQzbCuh2M)