When Humans are Parasites on Algorithms
You know, I’d hoped to completely forget about these things, but apparently they’ve gotten worse–or I just wasn’t fully exposed to the true horror.
If you’re not aware, writer and artist James Bridle recently wrote a post called Something is wrong on the internet.
Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level. Much of what I am going to describe next has been covered elsewhere, although none of the mainstream coverage I’ve seen has really grasped the implications of what seems to be occurring.
I’d suggest reading it, though maybe don’t watch the linked videos, which are comparatively mild examples.
It’s still not entirely clear to what extent the creation of these things is automated. Some involve real humans singing finger family songs to match bizarre combinations of keywords. Others are clearly either automated or produced in sweatshops–the dirty secret of AI is the extent to which a lot of it is just an update of the Mechanical Turk.
But the crucial point that I missed is that there is an obvious automated system behind all of this: YouTube’s algorithms. They’re not very smart, even as algorithms go. Though they’d be classified as AI, their intelligence is more like that of a bacterium, or perhaps a very dim sort of worm, like the kind that crawls onto a sidewalk after the rain and dies when the sun comes out.
This is an important point: when we say “Artificial Intelligence” we often think of a complex, almost magical intellect…but most of the real AI work is merely blindly optimizing for a particular goal. AlphaGo is very good at playing Go, and absolutely useless at anything else. The interesting part, for researchers, is figuring out how to transfer the principles behind the AI to solve other problems.
In YouTube’s case, the algorithm is blindly trying to find content that’s vaguely associated with the thing you just watched. As it turns out, combining that with human toddlers leads down a dark, dark hole, as their uncritical viewing reinforces the algorithm. The wet sidewalk has far more water than the damp earth; the worm doesn’t realize the danger; and we get a YouTube run by undead zombie worms, showing children content they’ll never forget, much like this metaphor.
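To see how “vaguely associated” plus uncritical viewing feeds back on itself, here’s a deliberately tiny sketch of that loop in Python. All the titles, tags, and the ranking rule are invented for illustration; YouTube’s actual system works on far richer signals than this.

```python
from collections import Counter

# Toy catalogue: each "video" is just a set of keyword tags.
# (All titles and tags are invented; this is not real data.)
CATALOG = {
    "nursery rhyme compilation": {"kids", "songs", "nursery"},
    "finger family song":        {"kids", "songs", "finger", "family"},
    "finger family spiderman":   {"kids", "finger", "family", "spiderman"},
    "spiderman elsa prank":      {"kids", "spiderman", "elsa", "prank"},
    "science documentary":       {"science", "space", "documentary"},
}

watch_counts = Counter()

def recommend(last_watched):
    """Pick the video (other than the one just watched) whose tags
    overlap most with the last one, breaking ties toward whatever has
    been watched most: a crude stand-in for engagement-driven ranking."""
    last_tags = CATALOG[last_watched]
    return max(
        (title for title in CATALOG if title != last_watched),
        key=lambda t: (len(CATALOG[t] & last_tags), watch_counts[t]),
    )

# A toddler watches whatever autoplays next, and every view feeds
# straight back into the ranking.
video = "nursery rhyme compilation"
history = [video]
for _ in range(5):
    video = recommend(video)
    watch_counts[video] += 1
    history.append(video)

print(history)
```

After one hop away from normal content, the history collapses into shuttling between two near-identical finger-family variants; nothing in the objective ever pushes the system back out of that cluster.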
So we’ve created an economic ecosystem where humans are literally performing incantations to fulfill what they think an AI wants, because an interaction between an AI and babies has gotten stuck in a local maximum.
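That “local maximum” is exactly what the simplest optimizers fall into. A minimal hill-climbing sketch (the objective function and all the numbers are invented for illustration) shows an optimizer that greedily follows its score uphill and parks on the nearest peak, with no way of knowing a better one exists:

```python
import random

def hill_climb(score, x, steps=1000, step_size=0.1):
    """Greedily accept any nearby point that scores higher.
    The optimizer knows nothing about the world beyond `score`."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if score(candidate) > score(x):
            x = candidate
    return x

# A double-humped objective (invented for illustration): the global
# peak is near x = 1.04, but a smaller local peak sits near x = -0.96.
def bumpy(x):
    return -(x * x - 1) ** 2 + 0.3 * x

# Started on the wrong hump, the climber settles on the local peak:
# crossing the valley between the humps would require a step that
# makes the score worse, which it never accepts.
result = hill_climb(bumpy, x=-1.0)
print(result)  # ends up near -0.96, not the global peak near 1.04
```

The algorithm isn’t malicious or clever; it just climbs. YouTube’s recommender is vastly more sophisticated than this sketch, but the failure mode–converging on whatever nearby peak the feedback happens to reward–is the same in kind.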
Mike Cook has a good response to this, Better Living Through Generativity:
Photography’s slow walk to ubiquity also had a darker side. As it became better-known, photography was understood to be a way to record real events, but this imperfect understanding enabled a lot of people to do fairly awful things with it. One of its first proponents faked his own death. People used it to prove the existence of fairies, to show that they could summon spirits, to verify the existence of monsters and mythical animals. People knew enough about photography to benefit from it, but not enough to protect themselves from it.
What solved this problem? Well, a bunch of things, but undoubtedly one of those things was helping people understand the processes by which these images were made, and giving them the power to make them. Cameras and development became more commonplace, people understood how to overlay images or touch them up after the fact. We see the same cycle today with Photoshop: first, it caused chaos; then people understood it; finally, people harnessed the power for themselves. Now we edit photos on our phones as we take them, before sharing them with others.
I see PROCJAM as part of an effort to enact this change for generative software. By bringing people the resources, the tools and the space to make generative systems, they can take ownership of the concept and understand their strengths and their weaknesses. Right now only a few hundred people enter PROCJAM, but ultimately we should all be working to make these ideas accessible and fun for people to try. In doing so, we popularise these ideas and rob them of some of their incapacitating power.
This won’t solve the problem of YouTube–in the end, Google is responsible for what their algorithm has done. But I do think that one of the best personal defenses against this is to learn what procedural generation and AI are really capable of and–more importantly–what they can’t do.
This stuff isn’t magic, even when it looks like it is.