“openFrameworks is an open source C++ toolkit for creative coding.”
It’s community-developed, and version 0.9.0 added the ability to run projects in the browser. It’s been used on some pretty neat projects, too: projection mapping, art installations, music visualizers, 3D-printed dress folding, video games…
A Deep Learning Framework for Character Motion Synthesis
I don’t think that the human-machine collaboration that I’ve been talking about is limited to one form of collaboration. It’s a continually-renegotiated partnership.
Motion capture, like rotoscoping before it, was heralded as the end of the animator, letting cheap machines replace expensive human labor. Reality turned out to be considerably more complicated. Motion capture can’t choose to include beautiful smears, for example. The artistic input is still necessary.
I think that while motion synthesis will replace some grunt inbetweening work, it also opens up the flexibility for animators to exert an immense amount of control over both motion-captured data and hand-animated performances, seamlessly combining them in whatever way makes the most sense.
Shan shui is a style of traditional Chinese landscape painting. As part of an M.F.A. thesis project, the artist SHI Weili has applied generative techniques to produce landscapes generated from arbitrary geographical locations. In the case of the pictured scroll, Manhattan is transubstantiated into an idyllic mountain scene.
There’s a lot going on here: it’s a very traditional art style, but generated computationally. Instead of being about a peaceful place far away, it translates the local place instead. The generation was technological, but the final scrolls were signed manually, with seals and red ink.
In the artist’s words, it “underscores the contrast between the artificial world and nature” and gives the audience a way to look inward and see the shan shui of their present place.
The generation was built with openFrameworks and data about the buildings of Manhattan.
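The artist hasn’t published the generator’s internals here, so purely as an illustrative sketch of the general idea (every function name and number below is my own invention, not SHI Weili’s method): one way to turn building data into a landscape is to treat a row of building heights as a skyline and smooth it into a mountain ridge.

```python
def skyline_to_ridge(building_heights, rows=10):
    """Smooth a row of building heights into a 'mountain ridge',
    rendered in ASCII as a stand-in for brush strokes."""
    n = len(building_heights)
    # Average each height with its neighbors so hard rooftop
    # edges become rolling peaks.
    smoothed = []
    for i in range(n):
        window = building_heights[max(0, i - 2): i + 3]
        smoothed.append(sum(window) / len(window))
    peak = max(smoothed) or 1  # guard against an all-zero skyline
    lines = []
    for r in range(rows, 0, -1):  # render top row first
        lines.append("".join(
            "^" if h / peak * rows >= r else " " for h in smoothed))
    return "\n".join(lines)

print(skyline_to_ridge([10, 80, 30, 120, 60, 90, 20]))
```

The real piece obviously does far more (brush textures, composition, the scroll format), but the core move of re-reading one dataset as another kind of terrain is the same.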
Create 3D scenes (urban environments, green spaces, interiors…) and other elements with its unique visual node-based language, and integrate your creations directly into your game engine.
Results can be generated at design time or at runtime through the API, which also lets you develop your own nodes.
Trials are available at www.sceelix.com for anyone interested in having a go.
Sceelix
Speaking of tools, Francisco RA sent me this information about a 3D procedural engine. I’ve played around with it a little; the node-based interface is fairly easy to pick up, though I’d like to be able to orbit the camera more easily.
In 2015, Andrew Sorensen (previously) gave this talk that’s simultaneously a walk through a history of Western musical theory and a live coding performance.
The software he’s using is Extempore, a language and environment designed to support “cyberphysical programming”: a “human programmer operating as an active agent in a real-time distributed network of environmentally aware systems.”
This kind of real-time programming, interacting with the running system, is exactly the kind of human-machine collaboration that I expect to see in the future.
Strangethink released a new project last night. It shares some stylistic similarities with the art galleries of Secret Habitat, but its aggressive hostility to the player’s presence and its ominous televisions remind me of a more hostile Mystery Tapes.
The EGA-ish glitchy colors are a bit less inviting than the pure CGA aesthetic. Darkness oozes out of the televisions, their flickering screens desperately trying to tell us about Them. They want things. This is Their space.
The black doors in the galleries open into other dimensions, each filled with a new procedurally-generated island of art galleries, each trying to tell us a message about what They are like.
Don’t come expecting to find answers: the end of the game exists outside its logic, implemented in the metaphysical realm as the halting problem. Winning the game requires you to find the glitch that They can’t comprehend.
This week, Patreon had a data breach. Hopefully this won’t harm the crowd-patronage of artists and creators too severely; the funding model has been doing great things for the creative community, letting people earn a steady income producing things that are hard to monetize on the 21st-century internet.
Where my interest comes in, though, is that this problem could have been avoided if they had used procedural generation. This isn’t my crazy idea: it’s the recommendation of security expert Troy Hunt, who runs “have i been pwned?”, a website that gives users a reliable and trustworthy way to discover whether their information was stolen in a particular hack.
Based on what we know so far, Patreon was apparently using the actual data from their site on their test server. Having a lot of data that looks like your real data is vital for testing how software will behave under load, but using the real data is a bad idea.
The answer? Generate fake data!
Some developers write scripts themselves to create fake data, while others use products like SQL Data Generator. Using procedurally generated data means there’s no privacy or security risk if the information is stolen, and it allows the developers to test how the system will behave with millions of users before they need to do it for real.
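Here’s a minimal sketch of what such a script might look like (the field names and value distributions are invented for illustration, not Patreon’s actual schema):

```python
import random
import string

FIRST_NAMES = ["alice", "bob", "carol", "dave", "erin", "frank"]
DOMAINS = ["example.com", "example.org", "example.net"]

def fake_user(rng):
    """Generate one plausible-but-entirely-fake user record."""
    name = rng.choice(FIRST_NAMES) + str(rng.randint(1, 9999))
    return {
        "username": name,
        "email": f"{name}@{rng.choice(DOMAINS)}",
        # A random token stands in for a real password hash.
        "password_hash": "".join(
            rng.choices(string.hexdigits.lower(), k=32)),
        "pledge_cents": rng.choice([100, 500, 1000, 2500]),
    }

def fake_users(n, seed=0):
    # Seeded RNG, so every test run sees the same "database".
    rng = random.Random(seed)
    return [fake_user(rng) for _ in range(n)]

users = fake_users(1000)
```

If this dataset leaks, nothing of value is lost, and scaling `n` up to millions lets you load-test long before you have millions of real users.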
Though I usually focus on the artistic uses of procedural generation, there are also practical applications, like this one. And there are probably many more uses that are yet to be discovered.
(If you suspect your account might have been involved in a data breach, I recommend checking out https://haveibeenpwned.com/. It is safe to use, as it only stores user names and email addresses, not passwords or other data.)
A livecoding performance by Andrew Sorensen using Impromptu. The first two minutes are silent as the foundation is written, and then the music starts…
I find livecoding fascinating as an example of centaur-style artistic collaboration between artist and machine. Not only is the music created generatively, but the programmer is coding it live. All the steps along the way are part of the performance.
From ProcJam 2014, a short talk by Fernando Ramallo (who is the co-designer for Panoramical, an exploration of procedurally generated musical landscapes).
His focus is on using procedural generation as a tool for discovery. We can use it to unearth ideas that would never have occurred to us on our own. He also points out that the flexibility of procedural generation means that you can let the player interact with the generator.
The video is dense with ideas: there’s lots more in there. Go watch it!
I think that tools like this are going to be relatively commonplace in the future.
They won’t replace artists entirely, of course. If you need a specific chair, you’ll still need to make it by hand. And someone still needs to create the input data and choose how to use the output data (though admittedly that will sometimes be another AI). But you will be able to turn a few dozen chair designs into hundreds. Today, even the biggest games can only afford a limited degree of variation in their assets; mixing in procedural generation to this extent brings them closer to the variation we see in the real world.
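The “few dozen designs into hundreds” step can be as simple as jittering artist-made parameters within safe bounds. A toy sketch (the chair parameters and ranges here are all hypothetical):

```python
import random

# Hypothetical hand-made "base designs": each is just a parameter set.
BASE_CHAIRS = [
    {"seat_height": 45.0, "back_angle": 100.0, "legs": 4, "armrests": False},
    {"seat_height": 50.0, "back_angle": 95.0,  "legs": 4, "armrests": True},
]

def vary(base, rng):
    """Jitter an artist-made design within safe bounds."""
    return {
        "seat_height": round(base["seat_height"] * rng.uniform(0.9, 1.1), 1),
        "back_angle": round(base["back_angle"] + rng.uniform(-5, 5), 1),
        "legs": base["legs"],          # structural choices stay fixed,
        "armrests": base["armrests"],  # so every variant remains usable
    }

def generate_variants(bases, per_base, seed=0):
    rng = random.Random(seed)
    return [vary(b, rng) for b in bases for _ in range(per_base)]

chairs = generate_variants(BASE_CHAIRS, per_base=100)  # 2 designs -> 200 chairs
```

The artist keeps control of the ranges; the generator just fills in the crowd.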
Of course, that brings us to the question of whether you want every chair to be unique.
Game design and level design are already dealing with this: Thief: The Dark Project renders non-interactive doors as obvious flat textures without handles, and the original Doom had a finite set of memorizable interactive objects, but more photorealistic games are forced to find other ways to differentiate the interactive scenery.
The paper has been cited quite often, and some of those results look promising. And, given the recent rapid innovation in image synthesis, I expect that model synthesis has some very promising paths to explore. So be on the lookout for more tools that let you use procedural generation in your pipeline.