SmoothGrad
This is the kind of machine learning technique that I want to see more of.
Under the hood, neural networks are complicated things, and it's often difficult to see exactly why a network made a particular decision. SmoothGrad, by Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg, is a refinement of one method for making a sensitivity map.
A sensitivity map (or feature importance map, or saliency map) is a visualization of which pixels are important. Or, at least, which pixels the neural network considers to be important. Importance maps have been around for a while, and have been applied to neural networks before. What I think is interesting about the SmoothGrad approach is how human-comprehensible the results are. It's much more obvious what the machine thinks it's seeing and why it went wrong.
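The idea itself is strikingly simple: take the gradient of the class score with respect to the input pixels, but average it over many noisy copies of the image. Here's a minimal sketch in PyTorch; the model, input tensor, and hyperparameter defaults are my own illustrative assumptions, not something from the paper's code:

```python
import torch

def smoothgrad(model, x, target_class, n_samples=50, noise_level=0.15):
    """Minimal SmoothGrad sketch: average input gradients over noisy copies.

    x is a single image tensor of shape (C, H, W). noise_level is the
    noise standard deviation as a fraction of the input's value range.
    """
    sigma = noise_level * (x.max() - x.min())
    grad_sum = torch.zeros_like(x)
    for _ in range(n_samples):
        # Perturb the input with Gaussian noise and track its gradient.
        noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, target_class]
        grad, = torch.autograd.grad(score, noisy)
        grad_sum += grad
    # For display, take absolute values and sum over color channels
    # to get a single grayscale importance map.
    return (grad_sum / n_samples).abs().sum(dim=0)
```

The averaging is the whole trick: a raw gradient map tends to be visually noisy, and smoothing over perturbed inputs washes that noise out, which is what makes the results so much easier to read.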
And a note for future artists using neural networks: just as edge detection is a useful tool in the visual effects pipeline, I expect that importance maps will be similarly useful.