Deepdream + Densecap

Gene Kogan combined a Deepdream hallucination with an image-recognition-and-captioning algorithm (Densecap), so we can get labels for what the computer thinks it sees.
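(If you're curious what the plumbing might look like, here's a very rough sketch of the two-stage idea in PyTorch: gradient ascent on a pretrained classifier's activations for the Deepdream half, with the captioning half left as a stub, since Densecap is a whole separate project. This isn't Gene's actual code; the layer choice, the function names, and the filename are just placeholders for illustration.)

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A pretrained classifier; Deepdream just amplifies whatever features
# one of its layers already "sees" in the input image.
model = models.vgg16(weights="IMAGENET1K_V1").features.eval()

def deepdream(img, layer_idx=28, steps=20, lr=0.05):
    """Gradient ascent on one layer's activations -- the Deepdream step."""
    img = img.clone().requires_grad_(True)
    for _ in range(steps):
        out = img
        for i, layer in enumerate(model):
            out = layer(out)
            if i == layer_idx:
                break
        loss = out.norm()  # "make this layer fire harder"
        loss.backward()
        with torch.no_grad():
            # Nudge the image toward whatever excites the layer.
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
    return img.detach()

# The captioning half is a separate model entirely (Densecap is its own
# Lua/Torch project: github.com/jcjohnson/densecap), so this is only a stub.
def describe_regions(dreamed_img):
    raise NotImplementedError("feed the dreamed image to Densecap here")

# "clouds.jpg" is just an example filename.
preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])
dreamed = deepdream(preprocess(Image.open("clouds.jpg")).unsqueeze(0))
```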

Since Deepdream image generation is like looking for shapes in clouds, the results tend to be full of bizarre juxtapositions. (And, in this case, the computer is obsessed with dogs.) While the two algorithms don’t line up exactly, the captions hint at what the other half of the process thinks it sees.

Gene points out that while these particular competing neural networks are humorous, they demonstrate misunderstandings that could be troubling in more serious applications. (Indeed, current law-enforcement facial-recognition systems likely have racial biases due to the limitations of their training data.)

The European Union has been taking steps toward a “right to explanation”: the idea that you should be able to get an explanation of how an algorithm evaluated you. As far as I’ve seen, AI researchers are generally positive about the idea: they want to know why things work too, after all. Perhaps someday soon, you’ll see an evaluation of your information similar to the Densecap video. Hopefully it’ll be more accurate.