CycleGAN two model feedback loop
See, a thing I get really excited about with neural nets (and other forms of procgen) is when they output something that we couldn’t have gotten any other way. No matter how bizarre the output, what we’re seeing is something new and alien.
Lately, Mario Klingemann has been experimenting with feeding neural nets (mostly CycleGAN) into each other. The published results have that alien quality I value: mostly coherent, but in a way that no human would have chosen, let alone manually drawn.
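The shape of that experiment, as I understand it, can be sketched in a few lines. This is a toy illustration of the feedback-loop structure, not Klingemann’s actual pipeline: the two “models” here are stand-in functions, where the real setup would use trained CycleGAN generators mapping between image domains.

```python
# Toy sketch of a two-model feedback loop. The stand-in "models" are
# simple list transforms; in the real experiment each would be a
# trained CycleGAN generator, and each frame an image.

def model_a(image):
    # Stand-in for generator A (e.g. domain X -> domain Y).
    return [pixel * 2 % 256 for pixel in image]

def model_b(image):
    # Stand-in for generator B (e.g. domain Y -> domain X).
    return [(pixel + 7) % 256 for pixel in image]

def feedback_loop(image, steps):
    """Alternate the two models, feeding each one's output to the other."""
    frames = [image]
    for step in range(steps):
        model = model_a if step % 2 == 0 else model_b
        image = model(image)
        frames.append(image)
    return frames  # in the real setup, each frame is one video still

frames = feedback_loop([10, 20, 30], steps=4)
print(len(frames))  # the seed image plus one frame per step
```

The interesting behavior comes from iteration: each model keeps reinterpreting the other’s reinterpretation, so small quirks compound into imagery neither model would produce alone.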
Two uses of generative art: bringing to life something that I can picture but would take far too long to do by hand, and creating something that I would never have been able to imagine on my own.
There’s some overlap here: making a convincing forest, for example, requires a lot of little details that would take a lot of effort to invent by hand. Like most humans I am bad at randomization, and the little details of moss and undergrowth can sometimes be done more effectively with a good generator that mimics natural variation. But those results aren’t quite as abruptly startling as the images that are literally impossible for me to have pictured before I saw them.
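A tiny example of what “a good generator” buys you for that kind of detail, assuming nothing beyond the standard library: scattering undergrowth on a jittered grid rather than placing each clump by hand. The function and parameters here are hypothetical, just one common way to fake natural irregularity.

```python
import random

def scatter_undergrowth(width, height, spacing, jitter, seed=0):
    """Place detail points on a grid, nudging each one off-grid so the
    layout reads as organic rather than mechanically regular."""
    rng = random.Random(seed)  # seeded so the forest is reproducible
    points = []
    for gx in range(0, width, spacing):
        for gy in range(0, height, spacing):
            points.append((gx + rng.uniform(-jitter, jitter),
                           gy + rng.uniform(-jitter, jitter)))
    return points

points = scatter_undergrowth(100, 100, spacing=10, jitter=4.0)
print(len(points))  # one jittered point per grid cell
```

The human-placed version of this tends to fail in the opposite direction: we either space things too evenly or cluster them too deliberately, which is exactly the bad-at-randomization problem.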
These videos are examples of both of these uses: the process would have taken far too long to do by hand, and the nature of the process is one that I wouldn’t have come up with without first seeing it here.
Of course, as we become familiar with the imagery it becomes part of our thinking. Maybe not the exact process: humans are prone to pattern recognition, after all, so I’m likely to end up with a shortcut imitation of the process unless I study it closely. (A ton of formal fine art training is just learning to study things closely, to understand the original processes and develop better shortcuts.)
But seeing a new generative process is, in effect, introducing an alien thought pattern into my mind.