Generating Faces with Deconvolution Networks

You remember yesterday’s Neural Photo Editor? The interactivity was great and all, but the pictures were kind of small and mushy. Will there ever be a neural network with actually convincing detail?

If you hadn’t guessed by now, the answer is that such networks already exist: here’s an example of one. Using a dataset of higher-resolution images and the usual clever processing, it generates new faces and expressions with a high degree of detail.

Inspired by research into generating chairs with convolutional networks, Michael Flynn created this project. There’s a blog post discussing it in more depth, but what I want to focus on is how it demonstrates flexibility.

While you can feed it invalid inputs, most unit-length inputs produce reasonable faces and expressions, and you can smoothly interpolate between them. Being able to transition between states (or pick something in between) is a very useful property for any generative algorithm. And, of course, the quality of the results is only going to improve from here.
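To make the interpolation idea concrete, here’s a minimal sketch of spherical linear interpolation (slerp) between two latent codes. This is an assumption about how you might drive such a generator, not code from the facegen project itself: since the model expects unit-length inputs, slerp is a natural choice because every intermediate point stays on the unit sphere, whereas plain linear interpolation would shrink toward the origin mid-way.

```python
import numpy as np

def slerp(a, b, t):
    """Spherical linear interpolation between unit vectors a and b.

    Every intermediate point stays unit-length, so each step remains a
    valid input for a generator trained on normalized codes.
    """
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    dot = np.clip(np.dot(a, b), -1.0, 1.0)
    omega = np.arccos(dot)            # angle between the two codes
    if omega < 1e-8:                  # nearly identical: fall back to lerp
        return (1 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b

# Walk smoothly from one face code to another in 10 steps.
rng = np.random.default_rng(0)
z0 = rng.normal(size=128); z0 /= np.linalg.norm(z0)
z1 = rng.normal(size=128); z1 /= np.linalg.norm(z1)
frames = [slerp(z0, z1, t) for t in np.linspace(0.0, 1.0, 10)]
# Each frame would then be fed to the generator, e.g. model.predict(frame[None, :])
# (the model/predict names here are placeholders, not the project's actual API).
```

Rendering each interpolated code and stitching the outputs into a video is exactly how you get the smooth face-to-face transitions shown in the demo.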

Despite that, I also like the glitchy nature of the attempted rotations; it’s not a look you can get otherwise. I do hope that there will be room in future research for some of these oddball side effects. You never know what weird look will be the foundation of a future aesthetic movement.

The code is on GitHub: https://github.com/zo7/facegen

(via https://www.youtube.com/watch?v=UdTq_Q-WgTs)