A Deep Learning Framework for Character Motion Synthesis and Editing

I don’t think that the human-machine collaboration that I’ve been talking about is limited to one form of collaboration. It’s a continually-renegotiated partnership.

Daniel Holden, Jun Saito, and Taku Komura presented their character motion synthesis research at SIGGRAPH 2016. We’ve seen image synthesis, shape synthesis, text synthesis, and Magic synthesis before, but it’s interesting to see the approach applied to something more abstracted from the final rendered result.
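For readers curious what that looks like in practice, here is a minimal sketch (not the authors’ code) of the general idea behind this line of work: train a 1-D convolutional autoencoder over windows of motion-capture data so that motion can be synthesized or edited in the learned hidden space. The framework choice (PyTorch), the channel and window sizes, and every hyperparameter below are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class MotionAutoencoder(nn.Module):
    """Toy convolutional autoencoder over motion windows (channels x frames)."""

    def __init__(self, channels: int = 73, hidden: int = 256):
        super().__init__()
        # Encoder: temporal convolution + pooling compresses a motion window
        # into a shorter sequence of hidden activations (a crude "motion manifold").
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=25, padding=12),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2),
        )
        # Decoder: upsample and convolve back to per-frame joint parameters.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv1d(hidden, channels, kernel_size=25, padding=12),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    # Stand-in for real mocap: 8 windows, 73 channels per frame, 240 frames each.
    motion = torch.randn(8, 73, 240)
    model = MotionAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(5):  # tiny demo loop; real training runs far longer
        recon = model(motion)
        loss = nn.functional.mse_loss(recon, motion)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f"step {step}: reconstruction loss {loss.item():.4f}")
```

Once a network like this reconstructs real motion data well, editing happens in the compressed hidden space rather than frame by frame, which is what makes the combination of captured and hand-keyed motion discussed below plausible.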

Motion capture, like rotoscoping before it, was heralded as the end of the animator, letting cheap machines replace expensive human labor. Reality turned out to be considerably more complicated. Motion capture can’t choose to include beautiful smears, for example, the exaggerated in-between frames animators use to sell fast motion. The artistic input is still necessary.

I think that while motion synthesis will replace some grunt inbetweening work, it also gives animators an immense amount of control over both motion-captured data and hand-animated performances, letting them combine the two seamlessly in whatever way makes the most sense.

(via https://www.youtube.com/watch?v=urf-AAIwNYk)