Procedurally-Generated Processing Fireworks

Here are some Processing fireworks for you, complete with source code. You can see the live version over on OpenProcessing.org, where you can download it or browse the source.

It’s open source, under a Creative Commons SA license, so go make your own modifications. Add a new movement type, or some different kinds of particles, or just tweak the numbers and see what happens. I added a ton of comments, so hopefully people of all experience levels will find something useful.
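If you want to skim the idea before opening the sketch, here’s a rough Python sketch of the kind of particle loop that fireworks generators like this tend to share: spawn a burst of particles with random velocities, then apply gravity and drag each frame until they fade. This is a generic illustration, not the actual OpenProcessing source, and all the names and numbers are my own.

```python
import math
import random

GRAVITY = 0.05   # downward pull added to vertical velocity each frame
DRAG = 0.98      # velocity damping per frame

class Particle:
    def __init__(self, x, y):
        angle = random.uniform(0, 2 * math.pi)
        speed = random.uniform(1.0, 3.0)
        self.x, self.y = x, y
        self.vx = math.cos(angle) * speed
        self.vy = math.sin(angle) * speed
        self.life = random.randint(40, 80)  # frames until it fades out

    def update(self):
        self.vx *= DRAG
        self.vy = self.vy * DRAG + GRAVITY
        self.x += self.vx
        self.y += self.vy
        self.life -= 1

def burst(x, y, count=50):
    """One firework: a shell explodes into `count` particles."""
    return [Particle(x, y) for _ in range(count)]

particles = burst(200, 100)
for frame in range(100):
    particles = [p for p in particles if p.life > 0]
    for p in particles:
        p.update()  # in Processing, this is also where you'd draw the particle
```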




CycleGAN two model feedback loop

See, a thing I get really excited about with neural nets (and other forms of procgen) is when they output something that we couldn’t have gotten any other way. No matter how bizarre the output, what we’re seeing is something new and alien.

Lately, Mario Klingemann has been experimenting with feeding neural nets (mostly CycleGAN) into each other. The published results have that alien quality I value: mostly coherent, but in a way that no human would have chosen, let alone manually drawn.
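I don’t know the exact pipeline behind these videos, but the overall shape of a two-model feedback loop is easy to sketch: run an image through one model, feed the result into a second model, feed that back into the first, and keep every frame along the way. In this toy Python version the two “generators” are stand-in numpy functions; in the real thing they would be trained CycleGAN models translating between two image domains.

```python
import numpy as np

def model_a(image):
    """Stand-in for the first generator (domain X -> Y).
    A real version would run a trained CycleGAN forward pass."""
    return np.clip(np.roll(image, 3, axis=1) * 0.95 + 0.02, 0.0, 1.0)

def model_b(image):
    """Stand-in for the second generator (domain Y -> X)."""
    return np.clip(1.0 - image ** 1.1, 0.0, 1.0)

def feedback_loop(seed_image, steps=100):
    """Feed each model's output into the other, keeping every frame."""
    frames = [seed_image]
    current = seed_image
    for _ in range(steps):
        current = model_a(current)
        frames.append(current)
        current = model_b(current)
        frames.append(current)
    return frames  # stitch these together into a video

seed = np.random.rand(256, 256, 3)  # random noise as a starting image
frames = feedback_loop(seed, steps=10)
```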

Two uses of generative art: bringing to life something that I can picture but would take far too long to do by hand, and creating something that I would never have been able to imagine on my own.

There’s some overlap here: making a convincing forest, for example, requires a lot of little details that would take a lot of effort for me to invent by hand. Like most humans I am bad at randomization, and the little details of moss and undergrowth can sometimes be done more effectively by a good generator that mimics natural processes. But those aren’t quite as abruptly startling as the images that are literally impossible for me to have pictured before I saw them.

These videos are examples of both of these uses: the process would have taken far too long to do by hand, and the nature of the process is one that I wouldn’t have come up with without first seeing it here.

Of course, as we become familiar with the imagery it becomes part of our thinking. Maybe not the exact process–humans are prone to pattern recognition, after all, so I’m likely to end up with a shortcut imitation of the process unless I study it closely. (A ton of formal fine art training is just learning to study things closely, to understand the original processes and develop better shortcuts.)

But seeing a new generative process is, in effect, introducing an alien thought pattern into my mind.






The snarXiv

The arXiv is a site that collects preprints of scientific papers. In some fields it’s the major archive for the vast majority of papers…but since the papers are mostly pre-publication and not yet peer-reviewed, there’s also a vast variation in quality. Combine that with the difficulty of understanding, say, theoretical high-energy physics when it’s not your field, and you get some confusing paper titles.

The snarXiv, on the other hand, is, well:

The snarXiv is a random high-energy theory paper generator incorporating all the latest trends, entropic reasoning, and exciting moduli spaces.  The arXiv is similar, but occasionally less random.

It uses a context-free grammar to generate paper titles and abstracts.
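If you’ve never played with one, a context-free grammar is just a set of expansion rules: each symbol expands into one of several templates, recursively, until only plain words remain. Here’s a toy Python version in the same spirit; the rules are invented for illustration and are not the snarXiv’s actual grammar.

```python
import random

# A toy grammar in the spirit of the snarXiv; these rules are made up.
GRAMMAR = {
    "title": [
        "#adjective# #object#s in #theory#",
        "#theory# and the #problem#",
        "A Note on #adjective# #object#s",
    ],
    "adjective": ["Entropic", "Holographic", "Non-Perturbative", "Supersymmetric"],
    "object": ["Brane", "Instanton", "Moduli Space", "Wilson Loop"],
    "theory": ["Type IIB String Theory", "Quantum Gravity", "Chern-Simons Theory"],
    "problem": ["Hierarchy Problem", "Cosmological Constant Problem"],
}

def expand(symbol):
    """Pick a rule for `symbol` and recursively expand any #markers# in it."""
    template = random.choice(GRAMMAR[symbol])
    while "#" in template:
        start = template.index("#")
        end = template.index("#", start + 1)
        inner = template[start + 1:end]
        template = template[:start] + expand(inner) + template[end + 1:]
    return template

for _ in range(5):
    print(expand("title"))
```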

I like the list of things you can do with it:

Suggested Uses for the snarXiv[3]

1. If you’re a graduate student, gloomily read through the abstracts, thinking to yourself that you don’t understand papers on the real arXiv any better.

2. If you’re a post-doc, reload until you find something to work on.

3. If you’re a professor, get really excited when a paper claims to solve the hierarchy problem, the little hierarchy problem, the mu problem, and the confinement problem.  Then experience profound disappointment.

4. If you’re a famous physicist, keep reloading until you see your name on something, then claim credit for it.

5. Everyone else should play arXiv vs. snarXiv.[4]

That last item, arXiv vs. snarXiv, shows you two paper titles and invites you to guess which one is real. The results are illuminating.

The snarXiv is far from the first scientific paper generator, and it probably won’t be the last. Since the snarXiv is presently limited to high-energy physics, there are a great many other fields still waiting for their own generators.

http://snarxiv.org/

Blogpost about how it works: http://davidsd.org/2010/03/the-snarxiv/




Live Streaming Procedural Music

I need to talk about this today because the composer, “divenorth”, is only planning to leave the stream up for today. (Though the code for a version of it is on GitHub.)

Procedural generation is, of course, a great thing to live-stream. An automated live performance has a different set of priorities than interactive generation: entertaining an audience vs. entertaining a player. Viewers have more time to notice the failures and repetitions. It’s to the Moonlight generator’s credit that it manages to stay novel enough to keep sounding interesting, for me at least.

An experienced composer might be able to detect the seams–I’d like to hear about it if that’s the case–but to my mostly untrained ear it does a remarkable job of continual generation.

https://www.twitch.tv/divenorth

UPDATE: for future reference, here’s an earlier recording of the generator in action.







Removing bias from word vectors

I think it’s important to remember that algorithms are not neutral, objective truths. This is especially true when they’re trained on unfiltered public data. So this writeup about removing bias from the ConceptNet Numberbatch word vector dataset is compelling on both a practical and a theoretical level.

For the practical, they have word vector data that has measurably less gender bias embedded in its gender analogies. For the theoretical, they discuss some methods that can be applied to other kinds of machine learning, and link to more research.
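The core trick in most of this line of work is geometric: estimate a direction in the vector space that captures the bias (from definitional pairs like “he”/“she”), then remove each word vector’s component along that direction. Here’s a generic numpy sketch of that projection step; it illustrates the general approach rather than the ConceptNet team’s actual code.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def bias_direction(vectors, pairs):
    """Estimate a bias axis from definitional pairs like ('he', 'she')."""
    diffs = [normalize(vectors[a] - vectors[b]) for a, b in pairs]
    return normalize(np.mean(diffs, axis=0))

def debias(vector, direction):
    """Remove the component of `vector` that lies along the bias direction."""
    return vector - np.dot(vector, direction) * direction

# Toy 4-dimensional "embeddings", just to show the shapes involved.
vectors = {w: np.random.randn(4) for w in ["he", "she", "man", "woman", "doctor"]}
direction = bias_direction(vectors, [("he", "she"), ("man", "woman")])
vectors["doctor"] = debias(vectors["doctor"], direction)
```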

https://blog.conceptnet.io/2017/04/24/conceptnet-numberbatch-17-04-better-less-stereotyped-word-vectors/













Inspirobot.me

I have no idea how Inspirobot is generating its seemingly endless supply of inspirational memes. Templating? Markov chains? A neural network? Tons and tons of hand-edited entries?

If I had to guess, I’d say a template seems the most likely, based on where the cohesion breaks down: it seems to have a good grasp on the grammar of the inspirational image, which most Markov chains are bad at. And a character-based neural net would likely make up some words on occasion, unless it had much more training data and lower entropy than this one seems to have.

If I had to implement something like this, I might try a meaning-swerving approach, using something like ConceptNet or WordNik to find substitutes for words in inspirational phrases, but a massive Tracery grammar might be faster to implement. And, of course, you can mix and match some of these to get the input data for the templates.
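For what it’s worth, the template half of that idea is only a few lines: stock inspirational skeletons with slots, filled in from lists of substitute words. In this sketch the word lists are written by hand; a more interesting version would pull substitutes from ConceptNet or Wordnik instead. All of this is my own guess at a mechanism, not how Inspirobot actually works.

```python
import random

TEMPLATES = [
    "Believe in your {noun} and the {noun2} will follow.",
    "Every {noun} is a chance to {verb}.",
    "Don't {verb} until you have {verb2} your {noun}.",
]

# Hand-written substitutes; a real version might query ConceptNet or Wordnik here.
WORDS = {
    "noun": ["dream", "shadow", "spreadsheet", "destiny", "lunch"],
    "noun2": ["universe", "consequences", "paperwork"],
    "verb": ["begin", "surrender", "hydrate", "transcend"],
    "verb2": ["forgiven", "measured", "outgrown"],
}

def motto():
    """Fill a random template; unused slots are simply ignored by format()."""
    template = random.choice(TEMPLATES)
    return template.format(**{slot: random.choice(options)
                              for slot, options in WORDS.items()})

for _ in range(5):
    print(motto())
```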

If anyone knows how it actually works, do let me know.

Meanwhile, I think I’ve found a new motto for the blog…

[Inspirobot-generated image]

http://inspirobot.me/








Microscale

I continue to be fascinated by using outside patterns to provide structure to generative processes. This week’s example of unusual input is Microscale, by Ales Tsurko, which uses Wikipedia articles to make music.

Notably, rather than a naive 1-to-1 mapping, it treats the articles as step sequencers, switching the steps on and off according to a regular expression (which is also the track title). The artist explains the intent:

The concept behind microscale is to show that through transforming one media (text) into another media (music), the meaning can be transformed – the article has its own meaning, but the music generated from the article has a completely different meaning. 

This is, I think, one of the strengths of generativity: presenting familiar things in a new light, showing hidden patterns, and giving us a new way to find meaning in the ordinary.

It’s also flexible: the web version is hackable (for starters, enter your own track title) and the source code is available.
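To make the step-sequencer idea concrete, here’s a rough sketch of one way a text-plus-regex mapping could work: each character of the article is a step, and any step covered by a regex match is switched on. This is my own guess at the mechanism, not Ales Tsurko’s actual code.

```python
import re

def steps_from_text(text, pattern, num_steps=16):
    """Turn a slice of article text into an on/off step pattern.
    Steps whose character falls inside a regex match are 'on'."""
    window = text[:num_steps]
    on = [False] * num_steps
    for match in re.finditer(pattern, window):
        for i in range(match.start(), min(match.end(), num_steps)):
            on[i] = True
    return on

article = "Wikipedia is a free online encyclopedia written by volunteers"
track_title = r"[aeiou]n"  # the regex doubles as the track title
pattern = steps_from_text(article, track_title)
print("".join("x" if step else "." for step in pattern))
```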

As with most audio projects, you really have to listen to it to get the full effect:

http://alestsurko.by/microscale/



lewisandquark:

What happens when really old advice meets really new technology?

A recurrent neural network (like the open-source char-rnn framework used here) can teach itself to imitate recipes, paint colors, band names, and even guinea pig names. By examining a dataset, it learns to formulate its own rules about it, and can use these rules to generate new text that - according to the neural network - resembles the dataset. But since the neural network is doing all this without cultural context, or any knowledge of what the words really mean, the results are often a bit bizarre.

In this example, the dataset is a list of more than 2000 ancient proverbs, collected by reader Anthony Mandelli. Some of these are well-known, such as “You can lead a horse to water, but you can’t make it drink.” and “Where there’s a will, there’s a way.” Others are frankly a bit strange: “Where there’s muck there’s brass.” and “A curst cow has short horns.” and “Be not a baker if your head is made of butter.”

What will a neural network make of this ancient wisdom?

If you answered “Really really weird proverbs”, you are correct.

A fox smells it better than a fool’s for a day.
No songer in a teacuper.
A fool in a teacup is a silent for a needle in the sale.
No man is the better pan on the hunder.
A mouse is a good bound to receive.
Do not come to the cow.

Some of them almost make sense:

A good wine makes the best sermon.
A good fear is never known till needed.
Death when it comes will have no sheep.
An ounce of the heart comes without an exception.
A good face is a letter to get out of the fire.
No wise man ever wishes to be sick.
A good excuse is as good as a rest.
There is no smoke without the best sin.
A good man is worth doing well.
A good anvil does not make the most noise.

While others would be more difficult to pass off as real proverbs:

We can serve no smort.
A good face is a letter like a dog.
A good earse makes a good ending.
Gnow will not go out.
Ung.
A fox smeep is the horse of the best sermon.
No sweet is half the barn door after the cat.
There is not fire and step on your dog and stains the best sermon.
An ox is a new dogn not sing in a haystar.

One of the oddest things to emerge from the proverb-trained neural network is a strange obsession with oxen. I checked, and there were only three oxen-related proverbs in the dataset, yet they appear frequently in the neural network’s version, and usually as rather powerful creatures.

An ox can lever an enemies are dangerous and restens at home.
An ox is not to be given with a single stone.
An ox is never known till needed.
An ox is as good as a best.
An ox is not to be that wound is hot.
An ox is a silent for the gain of the bush.
An ox is not fill when he will eat forever.

Whatever the internal mythos the neural network has learned from these ancient proverbs, oxen are mysteriously important.

Neural Network Proverbs

Janelle Shane has been doing a lot of fun experiments with using neural nets to generate things. Like these proverbs.

(It still astonishes me on some level that this produces recognizable English so readily, even though I know more or less how it works. The training process is fascinating to watch, whether on sports team names or on my own image-training tests.)

2000 source proverbs, of course, is still a fairly small number when it comes to training a neural net, which explains some of the anomalies. (Janelle recommends a minimum of 1000 examples; even ~15000 Magic: the Gathering cards is a bit low compared to, say, the 14 million images in the ImageNet dataset.) But it’s obviously enough to produce some interesting results.
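If you’re curious what that training actually looks like in code, here’s a heavily stripped-down character-level RNN in PyTorch. It’s a sketch of the general technique rather than the char-rnn framework itself (which is a separate Torch project), and the proverbs.txt filename is hypothetical; point it at any text file with one example per line.

```python
import torch
import torch.nn as nn

text = open("proverbs.txt").read()  # hypothetical file, one proverb per line
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, vocab_size, hidden_size=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, hidden=None):
        h, hidden = self.rnn(self.embed(x), hidden)
        return self.out(h), hidden

model = CharRNN(len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

encoded = torch.tensor([char_to_ix[c] for c in text])
seq_len = 100

# Training: predict each next character from the characters before it.
for step in range(2000):
    i = torch.randint(0, len(encoded) - seq_len - 1, (1,)).item()
    x = encoded[i:i + seq_len].unsqueeze(0)          # input characters
    y = encoded[i + 1:i + seq_len + 1].unsqueeze(0)  # same sequence shifted by one
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Sampling: feed the model its own output one character at a time.
def sample(model, start="A", length=80):
    ix = torch.tensor([[char_to_ix[start]]])
    hidden, out = None, start
    for _ in range(length):
        logits, hidden = model(ix, hidden)
        probs = torch.softmax(logits[0, -1], dim=0)
        ix = torch.multinomial(probs, 1).unsqueeze(0)
        out += chars[ix.item()]
    return out

print(sample(model))
```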

http://lewisandquark.tumblr.com/post/162097037117/ancient-wisdom-from-the-neural-network







Dancing Markovs

These are some fun little projects by Tero Parviainen: animated visualizations of musical Markov chains, synced to the music that the chains are generating.

If you’ve wondered just how Markov chains work, this is a good way to learn about them: each node represents a note, with arrows pointing to the other notes that could be played next. After each note, the chain chooses randomly among the arrows. That’s the simple concept behind every Markov chain, but they’re not usually visualized as plainly as this.
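In code, that whole idea fits comfortably in a screenful. Here’s a bare-bones Python sketch with a made-up transition table rather than one learned from a real piece; it’s the concept only, not Tero Parviainen’s implementation.

```python
import random

# Each note maps to the notes that can follow it; the weights stand in
# for how often each transition appears in the source material.
TRANSITIONS = {
    "A4": {"B4": 2, "F#4": 1},
    "B4": {"A4": 1, "D5": 1},
    "D5": {"B4": 3, "A4": 1},
    "F#4": {"A4": 1},
}

def next_note(current):
    """Follow one of the outgoing arrows at random, weighted by its count."""
    options = TRANSITIONS[current]
    notes = list(options.keys())
    weights = list(options.values())
    return random.choices(notes, weights=weights, k=1)[0]

note = "A4"
melody = [note]
for _ in range(16):
    note = next_note(note)
    melody.append(note)
print(" ".join(melody))
```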

Dancing Markov Gymnopédies: https://codepen.io/teropa/pen/bRqYVj

Dancing Markovs: Play it yourself edition: https://codepen.io/teropa/full/JJNKxW/




PROCJAM is looking to commission artists - deadline in 12 hours!

Because of the very successful Kickstarter, ProcJam is able to fund another art pack, and they’re looking for artists they can pay to contribute. (They’re looking for both beginners and experts, so you have no reason not to apply right now.)

Apply yourself, and let all your artist friends know!

http://www.procjam.com/2017/06/06/procjam-2017-call-for-artists/