Experimental music video made with neural face generator 

Mario Klingemann has been working on a neural-net face generator that uses pix2pix to translate simple facial-expression markers into generated images. And then he fed a music video into it.
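Conceptually, the conditioning side of such a pipeline — turning detected facial landmarks into a simple input image for the generator to translate — might look something like the sketch below. The landmark coordinates and the `landmarks_to_map` helper are hypothetical stand-ins; the actual pix2pix model (not shown) would be a trained network consuming the resulting map.

```python
import numpy as np

def landmarks_to_map(landmarks, size=256):
    """Rasterize (x, y) landmark points into a single-channel conditioning image.

    In a pix2pix-style setup, an image like this (landmarks only, no texture)
    is what the trained generator translates into a full face.
    """
    canvas = np.zeros((size, size), dtype=np.float32)
    for x, y in landmarks:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < size and 0 <= yi < size:
            canvas[yi, xi] = 1.0  # mark each landmark as a bright pixel
    return canvas

# Toy "open mouth" landmarks for a 256px frame — purely illustrative values.
mouth = [(110, 180), (128, 190), (146, 180), (128, 170)]
cond = landmarks_to_map(mouth)
print(cond.shape, cond.sum())  # → (256, 256) 4.0
```

A real system would extract dozens of landmarks per video frame with a face tracker and feed each frame's map through the generator, which is what produces the frame-to-frame jumpiness described below.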

The resulting hallucinatory video has its moments of lucidity, when the lip-syncing almost works, as well as stretches of more bizarre imagery. And although the data makes many of the movements jumpy, the space itself is fairly consistent: completely different faces, but matching expressions. (As Mario points out, more open mouths would make the lip-sync work better.)

My expectation for the near-term artistic use of neural nets is something along these lines, at least conceptually. Given a data space (faces, in this case), you introduce a structure (the face markers) and a concept (the expressions extracted from the song).

I expect that the visual aesthetics will be fairly varied, since different techniques can have radically different styles. But the framework behind them will generally fall along those lines.

Meanwhile, we have this. And I have to admit, it’s less disturbing than Mario’s earlier pose-generation tests.

https://www.youtube.com/watch?v=-elV0lT5-H4