Ethical imperatives in AI and generative art

Procedural generation is about creating things that aren’t under your complete control, and Liza Daly has put some thought into the ethical implications of that. Drawing on examples from the bot community, her essay extends those guidelines to cover all kinds of generative media.

Ethics have come up here a few times before, though mostly in the context of bots. This stuff is important to think about if you have any intention of releasing your generative projects into the wild.

Liza uses a number of examples I was previously unaware of, and puts forth three principles: anticipate deliberate misuse, consider how code is a powerful amplifier, and show your work.

I think that the importance of showing your work is the one that has been the most overlooked. Despite things like the EU’s right to explanation, most algorithms are still opaque. And, as Liza points out, most people have vast misconceptions about artificial intelligence.

How will you know to ask for an explanation when you don’t realize there’s even a question? Can you tell what was made by a computer and what wasn’t?


I share Liza’s concerns about the implications of the intersection of all three guidelines. Your news streams are already being polluted by maliciously spawned fabrications. I’ve pointed out many ways that images can be manipulated. Now picture all those clickbait articles automated.

A politician said something inflammatory: here’s a manipulated video, with the dialog remapped. A celebrity laughed at the solemn funeral or the racist joke: here’s a smile-vector-altered photo. Dial it up to 11, automate the entire pipeline, and flood YouTube with “proof” of all kinds of things.

People with unusual or unpopular problems will suffer more: people who don’t want to listen will have a new excuse to claim everything was Photoshopped, even when they can’t really tell the difference themselves. We believe evidence that supports our beliefs and discard evidence that makes us feel uncomfortable.

So, paradoxically, we’ll have people believing all kinds of things that fit their biases while rejecting things that are true but discomfiting. If you thought Flat Earthers and Moon-Landing denialists were bad, wait until they can claim that everything is manipulated while understanding none of the science behind it. Truth becomes what we already believe.

I share Liza’s belief that generative artists can help slow this. By exposing more people to how the systems actually work, countering abuses of the tools before they happen, and taking steps to deal with the ability of computers to scale problems up to massive levels, we can make our own corners of the world a little better.

https://worldwritable.com/ethical-imperatives-in-ai-and-generative-art-b8cf51af4c5#.v6w2sudgi