Procedural Fireworks 2.0

I thought of an improvement to the fireworks that I made the other day, and I decided that the changes would be a good example of something I wanted to talk about: treating your generators as part of a larger process.

You see, the original fireworks were pure particles: each explosion, smoke puff, and trailing spark was a separate, simulated object. This kind of naive simulacrum is a pretty common approach, particularly when you’re feeling your way through the implementation and need more flexibility. But it isn’t always the best use of a generator.
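
For contrast, here’s a minimal sketch of what that per-particle version might look like. The names and numbers are my own illustration, not the actual sketch’s code:

```ts
// Naive per-particle approach: every trailing spark is its own
// simulated object with a position, velocity, and lifespan.
interface Particle {
  x: number;
  y: number;
  vx: number;
  vy: number;
  life: number; // frames remaining before the particle is removed
}

function step(particles: Particle[]): Particle[] {
  for (const p of particles) {
    p.x += p.vx;
    p.y += p.vy;
    p.vy += 0.1; // gravity
    p.life -= 1;
  }
  // Hundreds of these get created and culled every frame.
  return particles.filter((p) => p.life > 0);
}
```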

In this case, I decided that all of those colored trails didn’t need to be individually tracked. They were adding hundreds of particles to each frame, but didn’t do much. So I added an image buffer. It’s just a hidden image the same size as the canvas that gets drawn to each frame by the falling sparks.
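
In TypeScript against the browser’s 2D canvas API, that buffer might look something like this; the identifiers are illustrative, not the sketch’s actual code:

```ts
const canvas = document.querySelector('canvas') as HTMLCanvasElement;
const ctx = canvas.getContext('2d')!;

// Hidden image the same size as the visible canvas.
const trailBuffer = document.createElement('canvas');
trailBuffer.width = canvas.width;
trailBuffer.height = canvas.height;
const trailCtx = trailBuffer.getContext('2d')!;

// A falling spark stamps a dot into the buffer once per frame,
// instead of being tracked as a live particle afterwards.
function stampSpark(x: number, y: number, color: string): void {
  trailCtx.fillStyle = color;
  trailCtx.fillRect(x, y, 2, 2);
}

// Composite the accumulated trails under the live particles.
function drawTrails(): void {
  ctx.drawImage(trailBuffer, 0, 0);
}
```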

Since the trails in the buffer have no lifespan, I had to manage the fade-out another way: at the start of every frame, I keep the image from last time but draw a transparent black rectangle over the whole thing, so the old colors gradually fade.
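
A minimal sketch of that fade, assuming the buffer starts out as opaque black (the alpha value is a guess; smaller means a slower fade):

```ts
// Dim everything slightly each frame instead of clearing. Assumes
// the buffer has an opaque black background; repeated translucent
// black rectangles pull every pixel toward black over time.
function fadeTrails(
  trailCtx: CanvasRenderingContext2D,
  width: number,
  height: number,
): void {
  trailCtx.fillStyle = 'rgba(0, 0, 0, 0.08)'; // illustrative alpha
  trailCtx.fillRect(0, 0, width, height);
}
```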

There are tradeoffs: the individual trails no longer have consistent color variation (though that could be re-implemented by pushing it up the hierarchy to the burst sparks). The lifespan fade-out is uniform, losing some of the subtle variation. There’s a usually-invisible after-image that doesn’t quite get cleaned up (though it easily could, with some more post-processing). The old particles no longer shrink (though I added a bright center to them, so the fade-out looks like they shrink a bit).
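
That bright-center trick might look something like this hypothetical helper: under the uniform fade, the dim halo drops below visibility before the white core does, so the spark appears to shrink:

```ts
// Hypothetical two-layer spark: a dim halo plus a bright core.
function stampBrightSpark(
  trailCtx: CanvasRenderingContext2D,
  x: number,
  y: number,
): void {
  trailCtx.fillStyle = 'rgba(255, 120, 40, 0.5)'; // dim outer halo
  trailCtx.fillRect(x - 2, y - 2, 5, 5);
  trailCtx.fillStyle = 'rgb(255, 255, 255)'; // bright center
  trailCtx.fillRect(x, y, 1, 1);
}
```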

So it’s not an exact replica. It’s less flexible. But it looks almost the same and it’s faster. Now my computer can display many more fireworks without slowing down.

It’s better in some ways, worse in others. Every generator is going to involve trade-offs like this. And adding a new processing layer (the image buffer) opens up new doors for expanding the effects.

Which makes it a good example of one of the big points I’ve been trying to make: you don’t need to use the literal output of your algorithm.

Doing more processing on your output is a perfectly valid way to add more interest or find better performance: for example, running the output of a neural net image stylization through an upscaler, so you only have to generate a quarter of the pixels you would otherwise need.
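
As a toy canvas analogue of the same idea (not the neural-net pipeline itself), rendering into a half-width, half-height buffer produces a quarter of the pixels and lets a cheap upscale fill the screen; all names here are illustrative:

```ts
const screen = document.querySelector('canvas') as HTMLCanvasElement;
const screenCtx = screen.getContext('2d')!;

// Half the width times half the height = a quarter of the pixels.
const lowRes = document.createElement('canvas');
lowRes.width = screen.width / 2;
lowRes.height = screen.height / 2;
const lowCtx = lowRes.getContext('2d')!;

// ...run the expensive generation into lowCtx here...

// Stand-in for a real upscaler: stretch the small image to fit.
screenCtx.imageSmoothingEnabled = true;
screenCtx.drawImage(lowRes, 0, 0, screen.width, screen.height);
```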

Generators can be literal, but they don’t have to be.

https://www.openprocessing.org/sketch/437851