Artistic style transfer for videos

I keep talking about style transfer and related tech, for good reason: it’s maturing at a rapid pace. Take, for example, this research by Manuel Ruder, Alexey Dosovitskiy, and Thomas Brox on applying it to video.

There’s a ton of exciting stuff like this going on right now. It’s one reason I’m looking forward to seeing what comes out of the procedural generation track at the upcoming nucl.ai conference in July. I won’t be able to be there, but it looks like some really interesting stuff is going to be presented this year. (Including Neural Doodle!)

I expect that within the next two to three years AI-enabled technology is going to become part of a standard workflow for many commercial artists, particularly those working in VFX and videogames. It’s not going to replace artists, but we’re going to have a lot more artistic centaurs, many of whom will find uses for these tools that the original researchers never anticipated.

Here’s another video with some of the more technical details for the video style transfer research:

(via https://www.youtube.com/watch?v=Khuj4ASldmU)







Norman’s Sky

While we’re waiting to see if No Man’s Sky lives up to our expectations, Ivan Notaros (who also made Library of Babbler) went and made Norman’s Sky for LOWREZJAM. 

Exploring a universe in 64x64 pixels feels very Noctis-like. Every dot you see in the tiny, pixelated sky is a place you can visit, with seamless planetary landings. 


That explorability is one of the things I like about procedural generation. You can make a space where there’s no backdrop, where even distant things are real objects. 


It’s not the only way to get the effect, and you still have to take into account that there will be metaphysical limits to your interactions even if there are no physical ones. But the low-fi aesthetic helps with that: you don’t expect ultra-realism in a teeny window.

Ivan’s been talking about a possible continuation of the low-fi procedural space exploration, with maybe a few more pixels. Sounds like a good idea to me.

https://nothke.itch.io/normans-sky







Listen to Wikipedia

Do not just look at these GIFs. You’ve got to hear this to get the full experience: Listen to Wikipedia

It works by watching the edits being made to Wikipedia, live as they happen, and playing notes in response. Stick it up in the background and listen to the data.
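If you want to tinker with the underlying idea yourself, here’s a minimal sketch of the same concept. It isn’t the site’s own code: it assumes the public Wikimedia EventStreams recent-changes feed and the requests library, and it just prints a note name per edit (scaled by the size of the change) instead of synthesizing any audio.

```python
# Sketch: sonify Wikipedia's recent-changes feed. Not the Listen to
# Wikipedia implementation, just the same idea in miniature.
# Assumes the public Wikimedia EventStreams endpoint and `requests`;
# prints note names instead of playing audio.
import json
import requests

STREAM_URL = "https://stream.wikimedia.org/v2/stream/recentchange"
NOTES = ["C", "D", "E", "G", "A"]  # pentatonic, so anything sounds okay

def note_for(change):
    """Map the size of an edit to a note and an octave."""
    old = (change.get("length") or {}).get("old") or 0
    new = (change.get("length") or {}).get("new") or 0
    delta = abs(new - old)
    octave = min(delta // 500, 4) + 2  # bigger edits -> higher octave
    return f"{NOTES[delta % len(NOTES)]}{octave}"

def listen():
    with requests.get(STREAM_URL, stream=True) as resp:
        for line in resp.iter_lines():
            if not line.startswith(b"data: "):
                continue  # skip SSE comments and event-type lines
            change = json.loads(line[len(b"data: "):])
            if change.get("type") != "edit":
                continue
            print(note_for(change), change.get("title", ""))

if __name__ == "__main__":
    listen()
```

The real site goes further, distinguishing different kinds of edits and actually playing the notes in your browser, but the core loop really is this small: a stream of events in, a stream of notes out.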

I really like this kind of thing, for a multitude of reasons: it uses an unusual input as an effective way to create structure; it gives me a completely new way to understand the data; and it creates listenable music that’s totally generated but still has a very human connection.

Listen to Wikipedia was built by Stephen LaPorte and Mahmoud Hashemi, and you can peruse the source code on GitHub.

http://listen.hatnote.com/




PANow - Paul Weir - May 2015

A presentation by Paul Weir, all about procedural audio. Paul Weir is the audio director and sound designer for No Man’s Sky. Generative music is one of his big interests, and the presentation covers a lot of examples from the past twenty years of games: some are systems he worked on himself, others are ones he likes or finds influential.

Among the things discussed: Ballblazer, LucasArts’ iMUSE, Ghost Master, Spore, Thief 4, the challenges of composing generative audio, the inner workings of some of the systems, retail stores using generative music, and generative music tools.






What do you get when you feed geocities into an undercooked neural net?

Geocities Forever

Take the preserved geocities pages from oocities.org, put them in a blender, and feed them into a smallish torch-rnn neural net. Take it off before it’s fully cooked, because GPU time is expensive and it’s glitchier and funnier this way. Serve as a webpage that gives you a totally new generated geocities page every time you refresh and click “Enter”.
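For a rough idea of the serving half of that recipe, here’s a hypothetical sketch (not the project’s actual code): it assumes torch-rnn is installed with a trained checkpoint on disk, shells out to torch-rnn’s sample script, and returns a freshly sampled page on every request. The checkpoint path and sample length are made up for illustration.

```python
# Hypothetical sketch of the "new page on every refresh" idea, not the
# actual Geocities Forever code. Assumes torch-rnn is installed, a trained
# checkpoint exists at the (made-up) path below, and this script is run
# from inside the torch-rnn directory.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

CHECKPOINT = "cv/geocities_checkpoint.t7"   # hypothetical path
SAMPLE_LENGTH = "4000"                      # characters per generated page

class GeneratedPage(BaseHTTPRequestHandler):
    def do_GET(self):
        # Sample a brand-new chunk of faux-HTML from the undertrained model.
        html = subprocess.run(
            ["th", "sample.lua",
             "-checkpoint", CHECKPOINT,
             "-length", SAMPLE_LENGTH,
             "-temperature", "0.9"],
            capture_output=True, text=True,
        ).stdout
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(html.encode("utf-8", errors="replace"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), GeneratedPage).serve_forever()
```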

I am reliably informed that we can blame @aanand for this project.




Interactive Sketching of Urban Procedural Models

This research, presented at SIGGRAPH 2016 by Gen Nishida, Ignacio Garcia-Dorado, Daniel G. Aliaga, Bedrich Benes, and Adrien Bousseau, is a fascinating approach to creating 3D buildings. I find it interesting not just because of the results, but because of what it suggests about the future of creating things.

Expect these kinds of interfaces to become more common in the future. They won’t replace general-purpose 3D modeling, since the system can only create things it can find a grammar for, but for specific purposes this is great. It also suggests some ways an interface like this could be introduced for sculpting: if you could build a library of generalized geometry grammars, plus modifiers and gestures for actions that change the model, it could open up new ways to sculpt 3D objects.

It’s also interesting because the results are parametric, and can be adjusted afterwards. Adding intuitive interactive interfaces to these kinds of generative grammars opens up many possibilities for artistic expression.
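To make the “parametric” point concrete, here’s a toy illustration of my own (it isn’t the paper’s grammar): the model is just a handful of parameters plus a rule for expanding them into geometry, so tweaking a value afterwards regenerates the whole building.

```python
# Toy illustration of a parametric building, not the paper's grammar:
# the "model" is parameters plus a rule for expanding them into boxes,
# so editing a parameter regenerates the geometry.
from dataclasses import dataclass

@dataclass
class Building:
    width: float = 10.0
    depth: float = 8.0
    floors: int = 5
    floor_height: float = 3.0
    roof_height: float = 2.0

    def to_boxes(self):
        """Expand the parameters into (x, y, z, w, d, h) boxes."""
        boxes = []
        for i in range(self.floors):
            boxes.append((0.0, 0.0, i * self.floor_height,
                          self.width, self.depth, self.floor_height))
        # A slightly inset "roof" box on top.
        boxes.append((0.5, 0.5, self.floors * self.floor_height,
                      self.width - 1.0, self.depth - 1.0, self.roof_height))
        return boxes

b = Building()
print(len(b.to_boxes()), "boxes")   # 6
b.floors = 12                       # adjust afterwards...
print(len(b.to_boxes()), "boxes")   # ...and the model regenerates: 13
```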

You can read up on their research here.

(via https://www.youtube.com/watch?v=rn4T9Y9PbgQ)








Codeology 

Codeology is a visualizer for open-source code on GitHub. It analyses the source code for the project and displays it as an ASCII-rendered 3D form. A 21st-century cyberpunk realization of code.

This kind of project is a neat example of applying an unusual input to a generative output. There’s even a practical use-case for visualizations like these: humans can recognize the generated patterns more easily than they can remember arbitrary names.
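As a small illustration of the “code in, recognizable shape out” idea (this is not Codeology’s actual algorithm, just my own sketch): fetch a repository’s language breakdown from the public GitHub API and hash it into a deterministic, identicon-like ASCII pattern, so the same repo always produces the same picture.

```python
# Sketch of the "code in, recognizable shape out" idea. NOT Codeology's
# actual algorithm: it assumes the public GitHub API's /languages endpoint
# and the requests library, and hashes the repo's language mix into a
# deterministic, mirrored ASCII pattern.
import hashlib
import random
import requests

GLYPHS = " .:-=+*#@"

def fingerprint(owner, repo, width=48, height=16):
    langs = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/languages",
        timeout=10,
    ).json()
    # Seed deterministically from the repo identity and its language mix,
    # so the same repo always produces the same picture.
    seed = hashlib.sha256(f"{owner}/{repo}:{sorted(langs.items())}".encode())
    rng = random.Random(seed.hexdigest())
    rows = []
    for _ in range(height):
        half = [rng.choice(GLYPHS) for _ in range(width // 2)]
        rows.append("".join(half + half[::-1]))  # mirror for recognizability
    return "\n".join(rows)

if __name__ == "__main__":
    print(fingerprint("project-codeology", "codeology"))
```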

The visualizations above are of my last NaNoGenMo novel and my play-by-email game management suite.  What do your projects look like?

In-Browser: http://codeology.braintreepayments.com/

Source Code: https://github.com/project-codeology/codeology











Color-Wander

Matt Deslauriers created this generative art program as a weekend project. Built with Node.js and the HTML5 canvas, it paints abstract generative artwork right in the browser.

You can view it in your browser (right click to download the image!) or look at the source code on GitHub.

Matt’s blog post about the project goes into some detail about it. I found the use of a distortion map to provide structure inspiring, and crowdsourcing the color palettes is another interesting way of shaping the output.
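As a rough illustration of the distortion-map idea, here’s a sketch of my own (in Python rather than the project’s Node.js, and not Matt’s algorithm): an image’s brightness steers wandering particles, and their paths become the drawing. The input filename is hypothetical.

```python
# Sketch of the distortion-map idea, not color-wander's code: an image's
# brightness steers wandering particles, and their paths become the drawing.
# Assumes Pillow is installed and "map.png" is any image you have handy.
import math
import random
from PIL import Image, ImageDraw

src = Image.open("map.png").convert("L")     # hypothetical input image
W, H = src.size
canvas = Image.new("RGB", (W, H), "white")
draw = ImageDraw.Draw(canvas)

def angle_at(x, y):
    """Turn local brightness into a heading, so the image shapes the flow."""
    b = src.getpixel((int(x) % W, int(y) % H))
    return (b / 255.0) * 2.0 * math.pi

for _ in range(300):                          # 300 wandering strokes
    x, y = random.uniform(0, W), random.uniform(0, H)
    for _ in range(200):                      # step each particle 200 times
        a = angle_at(x, y)
        nx, ny = x + math.cos(a) * 2, y + math.sin(a) * 2
        draw.line([(x, y), (nx, ny)], fill=(40, 40, 40))
        x, y = nx, ny
        if not (0 <= x < W and 0 <= y < H):
            break

canvas.save("out.png")
```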

It’d be interesting to take some of these techniques and combine them with other approaches. What happens when you use an animated Deep Dream image for a distortion map?






Opulent Artificial Intelligence 

a manifesto by Galaxy Kate 

At the Lost Levels unconference this year, @galaxykate0 handed out manifestos in zine form. 

The central thesis:

I want to coin the term “Opulent AI” for AI that takes up all of the resources it wants, takes all the attention, and makes the experience all about itself.

FOR NO PRACTICAL PURPOSE WHATSOEVER

AI is often relegated to unobtrusive background plumbing, so GalaxyKate calls for AI that deliberately draws attention to itself. The tiny zine is jam-packed with examples of Opulent AI, some suggested exercises, and more.

I think it’d be great if you made your own opulent artificial intelligence. It doesn’t have to be complex: the zine mentions Braitenberg vehicles and Tracery bots, and there’s a tiny sketch of the former below. Do you have an example of opulent AI you’d like to show off?
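To show how low the bar can be, here’s a minimal Braitenberg-style vehicle in a few lines of Python. It’s my own toy, not something from the zine: two light sensors cross-wired to two wheels is enough to make a little agent swerve dramatically toward a light.

```python
# A minimal Braitenberg-style vehicle: two light sensors cross-wired to
# two wheels, enough to make the agent swerve toward a light source.
# My own toy sketch, not code from the zine.
import math

LIGHT = (0.0, 0.0)        # where the light sits
TURN, SPEED = 0.5, 0.2

def brightness(x, y):
    """Light falls off with distance (the +1 avoids a blow-up at the source)."""
    return 10.0 / (1.0 + math.hypot(x - LIGHT[0], y - LIGHT[1]))

def step(x, y, heading):
    # Sensors sit slightly to the left and right of the nose.
    left = brightness(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
    right = brightness(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
    # Cross-wiring (left sensor -> right wheel, and vice versa) makes the
    # agent turn toward the brighter side; here that collapses to a single
    # heading update.
    heading += (left - right) * TURN
    speed = (left + right) / 2.0 * SPEED
    return x + math.cos(heading) * speed, y + math.sin(heading) * speed, heading

if __name__ == "__main__":
    x, y, heading = 10.0, -6.0, 2.0
    for i in range(201):
        if i % 25 == 0:
            print(f"step {i:3d}: ({x:6.2f}, {y:6.2f})")
        x, y, heading = step(x, y, heading)
```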



ProcGen blog

This blog is dedicated to Procedural Content Generation aka Procedural Generation aka PCG aka ProcGen, mainly in the context of Game Development.

For the last year I have been finding out more and more about this technology from the game industry’s point of view. Talking to people and researching. In this blog, I will talk about a few of my conclusions.

Submitted by Francisco RA


I was sent this submission about a new-ish blog. Always happy to see more people talking! One of the ongoing series of posts they’re running attempts to classify the ways procedural generation can be used, which is a useful thing to think about for anyone trying to figure out how to use procgen.

There’s a new post over there today about deterministic generation. You should check it out!