Neural Enhance

You’ve seen it on TV: magically taking a small, blurry image and enhancing it so you can see the details. Now it exists.

Implemented by Alex Champandard and inspired by several other people’s research into using neural networks to recover details, Neural Enhance appears magical. Of course, the details it adds don’t actually exist in the original photo: it’s just hallucinating the most likely pixel values based on what it sees.

There’s an online demo where you can upload your own photos and view the enhanced results.

Neural Enhance has already gotten a lot of attention, despite only being online for a short while. Which makes sense: even in its current state it’s a powerful tool. I anticipate that it’ll eventually become a common part of the pipeline for postprocessing the result of other algorithms. Why generate your image at 4K when you can generate it at a quarter of the size and upscale it with Neural Enhance?

Of course, the algorithm isn’t perfect. Since it’s making up the details it adds, it can occasionally get things very wrong. The better (and more constrained) the training data, the better the result. I can easily see a custom dataset being built for, say, post-processing smoke simulations: all of the fiddly little details in a fraction of the time. Alex Champandard has already tested it on videogame screenshots, which work really well.
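For the curious, here’s roughly what “hallucinating the most likely pixel values” looks like in code. This is a minimal sketch in the general style of learned super-resolution networks (think SRCNN), not Alex Champandard’s actual implementation, and the layer sizes are purely illustrative: upscale naively first, then let a small convolutional network add plausible detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A toy learned-super-resolution model, NOT Neural Enhance itself:
# naive bicubic upscale, then a small conv net predicts a correction.
class TinySuperRes(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),
        )

    def forward(self, low_res, scale=2):
        upscaled = F.interpolate(low_res, scale_factor=scale,
                                 mode="bicubic", align_corners=False)
        # The network only has to learn the missing high-frequency detail.
        return upscaled + self.net(upscaled)

model = TinySuperRes()
blurry = torch.rand(1, 3, 64, 64)   # stand-in for a small, blurry photo
print(model(blurry).shape)          # torch.Size([1, 3, 128, 128])
```

Training (not shown) would minimise the difference between the output and real high-resolution images; that training data is where the invented detail ultimately comes from, which is why better and more constrained datasets give better results.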




Wave Function Collapse in Unity

WaveFunctionCollapse is the hottest new general procedural generation technique right now. (And it can write poetry.) It’s already been ported to C++, the JVM, Javascript, Rust, and now Unity, just in time for ProcJam.
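The core loop is surprisingly small, even though the real implementations add tile weights, the overlapping-pattern model, and contradiction handling. Here’s a toy Python sketch of the simple-tiled idea with made-up tiles and adjacency rules; it illustrates the observe-and-propagate cycle rather than porting any of the versions above.

```python
import random

# Made-up tiles: each tile lists which tiles may sit next to it.
TILES = {
    "sea":   {"sea", "coast"},
    "coast": {"sea", "coast", "land"},
    "land":  {"coast", "land"},
}
W, H = 12, 6
# Every cell starts in "superposition": all tiles are still possible.
grid = [[set(TILES) for _ in range(W)] for _ in range(H)]

def neighbours(x, y):
    for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
        if 0 <= nx < W and 0 <= ny < H:
            yield nx, ny

def propagate(x, y):
    # Remove neighbour options that are no longer supported by this cell.
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        allowed = set().union(*(TILES[t] for t in grid[cy][cx]))
        for nx, ny in neighbours(cx, cy):
            reduced = grid[ny][nx] & allowed
            if reduced != grid[ny][nx]:
                grid[ny][nx] = reduced
                stack.append((nx, ny))

def collapse():
    while True:
        # Observe: pick the undecided cell with the fewest options left.
        open_cells = [(len(grid[y][x]), x, y)
                      for y in range(H) for x in range(W)
                      if len(grid[y][x]) > 1]
        if not open_cells:
            return
        _, x, y = min(open_cells)
        grid[y][x] = {random.choice(sorted(grid[y][x]))}
        propagate(x, y)

collapse()
print("\n".join("".join(next(iter(grid[y][x]))[0] for x in range(W))
                for y in range(H)))
```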

Props to @ExUtumno for the original implementation, and to Joseph Parker for the Unity version.

Unity Wave Function Collapse: https://selfsame.itch.io/unitywfc

(video via https://www.youtube.com/watch?v=CTJJrC3BAGM)











Owl Generator

Russell Fincher made this owl generator for Halloween.

And what more needs to be said? These are delightful owls. Seems like the perfect thing to submit to ProcJam.

You can download Owl Generator here: https://drive.google.com/file/d/0B65j6EIp4zMQMkNuLXZrdEpsQmM/view




NaNoGenMo 2016

National Novel Generation Month 2016 officially starts today. 

I’ve talked about NaNoGenMo a lot before, but I think an introduction is in order for people who are new to it or who want to learn more about it. 

So what is it? NaNoGenMo is a way for people to get together and make a thing that generates a novel, using NaNoWriMo’s definition of a novel: 50,000 words.

The way it’s organized is a bit unusual: it uses GitHub, a site for sharing open source projects, as a forum. Specifically, all of the threads are in the Issues section. There are already quite a few threads for this year, as people announce their projects.

Each year, there’s a Resources thread with links to things that may be useful. Previous years have included links to all kinds of useful tools; if you’re at all interested in text generation they’re worth reading.

People have made all kinds of novels as part of NaNoGenMo.

People have used all kinds of tools to make novel generators. While Python is popular due to the natural language libraries available, people have used tools ranging from the very friendly to the very complicated: Tracery is a very accessible way to get started with text generation, while at the other end people have built complicated neural networks.
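To give a sense of how low the barrier to entry is, here’s a tiny example using pytracery, the Python port of Tracery (pip install tracery); the grammar itself is made up for illustration.

```python
import tracery
from tracery.modifiers import base_english

# A made-up grammar: "origin" expands recursively into the other rules.
rules = {
    "origin": "#hero# #travelled# to the #place# and found #object.a#.",
    "hero": ["the knight", "a tired scribe", "your aunt"],
    "travelled": ["walked", "sailed", "was dragged"],
    "place": ["glass city", "moth library", "last lighthouse"],
    "object": ["oar", "unfinished novel", "emerald"],
}

grammar = tracery.Grammar(rules)
grammar.add_modifiers(base_english)   # enables modifiers like .a and .capitalize

for _ in range(3):
    print(grammar.flatten("#origin#"))
```

A 50,000-word novel is “just” a few thousand of those sentences in a row; the interesting part is making them hang together.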

While most of the novels have been in English, there have also been works in Armenian, French, Spanish, Finnish, and a procedurally-generated language.

I’ve participated every year; last year’s project was Virgil’s Commonplace Book. I wrote about its development on the project thread.

I look forward to seeing what everyone creates this year!

https://github.com/NaNoGenMo/2016 



Excellent question! While I can think of a few techniques you could demonstrate, other people have already come up with some great resources, so I’m going to mention them first:

I think Kate Compton’s So You Want To Build A Generator is a great starting point for this: a good overview of common approaches, and a discussion of possibilities. (Also available as a PDF.)

Her ProcJam2015 talk is also relevant, particularly the practical advice about data structures and using an array of numbers to connect generators together.

Speaking of Kate, her cut-and-play supplement for Seeds vol. 1 is a generator idea generator, which is a fun way to come up with new ideas to explore.

Casey Reas, one of the creators of Processing, built a series of artworks out of simple, elementary parts. Processing, in general, has been used for a lot of generative things. The recently re-launched openprocessing.org has a ton of examples for inspiration.

Anders Hoff’s ongoing essays On Generative Algorithms and Shepherding Random Numbers are also great because they have breakdowns of the artistic process behind creating new generators, showing how concepts are built up from basic parts.

For some specific use cases, here are some ideas:

You may also want to look up the work of pioneering computer artists like Frieder Nake and Vera Molnár.

In addition to Processing, Tracery and Cheap Bots Done Quick are some accessible tools for people to use to start creating their own generators.

Dungeon generation is its own giant subject: cycles, lock-and-key systems, and more narrative approaches are a few of the possible improvements over the basic algorithms (there’s a baseline sketch just after this list).

And don’t forget to check out the history of analogue generative systems: making a generator out of paper, pencils, and dice can be an accessible, hands-on way to introduce the concepts to people.
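Since dungeon generation came up above, here’s the baseline sketch mentioned there: a drunkard’s-walk cave carver, about as simple as dungeon generation gets, and a handy starting point before layering on cycles, locks, and keys.

```python
import random

# A drunkard's-walk dungeon: wander randomly, carving floor as you go.
W, H, STEPS = 40, 20, 400
grid = [["#"] * W for _ in range(H)]   # start as solid rock
x, y = W // 2, H // 2

for _ in range(STEPS):
    grid[y][x] = "."                             # carve the current cell
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    x = min(max(x + dx, 1), W - 2)               # keep the outer wall intact
    y = min(max(y + dy, 1), H - 2)

print("\n".join("".join(row) for row in grid))
```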

Hope this helps!




Casual Procgen Text Tools

It’s nearly time for NaNoGenMo and ProcJam, so this is an excellent time to mention Emily Short’s recent post about tools for text generation. The initial spark was the idea of a tool to help assemble corpora, but the post dives into a whole host of related ideas, including scraping the web for new data, ways to collaborate with the machine to find new ideas, using Wikipedia lists, filtering, things you can do to grammars, and more besides.

Whether you’re looking for pointers to tools to use for text generation or you’re looking for ideas for tools to build, there’s a lot of discussion to be had.

Her ProcJam talk about the aesthetic tarot for The Annals of the Parrigues is also worth watching, no matter what kind of generator you’re making.




Seeds, vol. 1

Seeds is out! The ProcJam zine, organized by Michael Cook and Azalea Raad and edited by Jupiter Hadley, is available for download. It runs to 106 pages of things making things and other procedural talk, plus a bonus cut-out section for Kate Compton’s Generative Fun Corner. (Ideal for coming up with your own ProcJam idea.)

It covers such topics as: Why is music like a spring? An easy way to generate fictional alphabets. A postmortem on generating towns. Growing self-representational life forms. Combinatorial literature. Being less random. Gardening games.  A couple of my own contributions.  And a lot more.

A whole lot more: I haven’t finished going over the whole thing in detail yet. I expect I’ll be finding new ideas from it for quite some time.

http://www.procjam.com/seeds/




emoji2vec: Emoji to vector

The techniques behind word2vec don’t stop with words: there’s a derived doc2vec that works on larger blocks of text, for example. But the topic for today is training the algorithm on emoji.

This research by Ben Eisner, Tim Rocktaschel, Isabelle Augenstein, Matko Bosnjak, and Sebastian Riedel trained on the emoji descriptions from the Unicode standard. This turns out to be sufficient to get a good result, and it means that you don’t have to sample millions of tweets to get a dataset with enough emoji.
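If you want to poke at the vectors yourself, they can be loaded like any other word2vec file, for example via gensim. Here’s a sketch; the file name, and the assumption that the vocabulary keys are the emoji characters themselves, are mine, so check the project’s repository for the actual download and format.

```python
from gensim.models import KeyedVectors

# Assumed file name for the pre-trained emoji2vec vectors.
e2v = KeyedVectors.load_word2vec_format("emoji2vec.bin", binary=True)

# Nearest neighbours in the emoji vector space (assuming emoji are the keys):
print(e2v.most_similar("🐙"))

# The usual word2vec-style arithmetic also works on emoji vectors:
print(e2v.most_similar(positive=["👑", "👩"], negative=["👨"]))
```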

Here’s a map of the emoji vector space:

[image: map of the emoji vector space]

https://arxiv.org/pdf/1609.08359v1.pdf











Imagegram 

Imagegram is a grammar for procedural generation, specified with images.

While there have been many logic- and rule-based approaches to building generators, having the entire specification for a pixel-based 2D grammar be a single image makes it much more accessible.

Made by Guilherme S. Tows (probably best known for Eversion), with a few of the example grammars by Kevan Davis, who has done quite a bit of generative work himself, including a NaNoGenMo novel.

The instructions are quite concise:

[image: Imagegram’s instructions]

You can, of course, add your own rules, either with the in-browser editor or by uploading an image of your own. 
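I haven’t dug into Imagegram’s exact rule semantics, so the sketch below is not its algorithm, just a minimal illustration of the general idea of a pixel-pattern rewrite system: each rule is a before/after pair of patches, and generation repeatedly finds a spot matching “before” and stamps “after” over it.

```python
import random

# 0 = empty, 1 = wall, 2 = flower; rules are (before, after) patch pairs.
GRID = [[0] * 16 for _ in range(16)]
RULES = [
    ([[0, 0]], [[1, 1]]),   # grow walls into empty space
    ([[1, 0]], [[1, 2]]),   # put a flower next to a wall
]

def apply_random_rule(grid, rules, tries=100):
    h, w = len(grid), len(grid[0])
    for _ in range(tries):
        before, after = random.choice(rules)
        ph, pw = len(before), len(before[0])
        y, x = random.randrange(h - ph + 1), random.randrange(w - pw + 1)
        # Does the "before" patch match the grid at (x, y)?
        if all(grid[y + dy][x + dx] == before[dy][dx]
               for dy in range(ph) for dx in range(pw)):
            for dy in range(ph):                 # stamp the "after" patch
                for dx in range(pw):
                    grid[y + dy][x + dx] = after[dy][dx]
            return True
    return False

for _ in range(200):
    apply_random_rule(GRID, RULES)

print("\n".join("".join(" #*"[cell] for cell in row) for row in GRID))
```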

There’s a lot of scope here to play around with, and it’s a good example of how changing the context of a known approach and providing a different set of tools can open it up to new creativity. 




word2vec

NaNoGenMo 2016 is just around the corner, so what better time to write about text generation? In this case, it’s a tool that was invented in 2013: word2vec.

The basic concept is pretty simple: take a bunch of text and learn a vector representation for each word. Words with similar meanings end up with similar vectors, and, more interestingly, arithmetic on the vectors corresponds to linguistic relationships: Paris - France + Italy = Rome.

The exact results depend on the training data, and it doesn’t always capture the exact association that a human might: with the Wikipedia corpus, (Minotaur - Maze) + Dragon gets results like “SimCity” and “Toei”, though (Minotaur - Labyrinth) + Dragon = “Dungeon”. When you’re using it for generation, it’s often a good idea to get a list of the closest vectors and then pick the best result based on some other criteria, or by weighted random sampling.

The original research paper and the code from the Google project can be found online, but there are many other implementations, such as a Python one in gensim. Here’s a post with more information and tutorials.
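If you want to try the arithmetic yourself, here’s a rough sketch using gensim. The file name is an assumption (the pre-trained Google News vectors are a common choice), and whether the vocabulary is capitalized depends on the corpus.

```python
from gensim.models import KeyedVectors

# Assumes you've downloaded a pre-trained, word2vec-format vector file.
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# Paris - France + Italy: most_similar returns the closest vectors, so you
# can take the top hit or filter/sample the runners-up, as suggested above.
for word, similarity in vectors.most_similar(
        positive=["Paris", "Italy"], negative=["France"], topn=5):
    print(word, round(similarity, 3))
```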

There are also some in-browser implementations you can play around with. This one uses the Japanese and English Wikipedia as corpora, and this one is implemented in Javascript.