Neural Enhance

You’ve seen it on TV: magically taking a small, blurry image and enhancing it so you can see the details. Now it exists.

Implemented by Alex Champandard and inspired by several other people’s research into using neural networks to recover details, Neural Enhance appears magical. Of course, the details it adds don’t actually exist in the original photo: it’s just hallucinating the most likely pixel values based on what it sees.

There’s an online demo where you can upload your own photos and view the enhanced results.

Neural Enhance has already gotten a lot of attention, despite having been online for only a short while. That makes sense: even in its current state it’s a powerful tool. I anticipate that it’ll eventually become a common part of the pipeline for postprocessing the output of other algorithms. Why generate your image at 4K when you can generate it at a quarter of the size and upscale it with Neural Enhance?
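To make that pipeline idea concrete, here’s a minimal sketch of the "render small, then upscale" step. The nearest-neighbour interpolation below is purely an illustrative stand-in: it’s the dumb classical baseline whose blocky output a learned model like Neural Enhance would replace with plausible hallucinated detail. (The function name and toy 2×2 image are my own, not from the project.)

```python
# Toy "render small, then upscale" pipeline. Nearest-neighbour
# interpolation stands in for the upscaling stage; a neural model
# would fill in sharper, hallucinated detail instead of blocks.

def upscale_nearest(image, factor):
    """Upscale a 2D list-of-lists greyscale image by an integer factor."""
    return [
        [image[y // factor][x // factor]
         for x in range(len(image[0]) * factor)]
        for y in range(len(image) * factor)
    ]

# A tiny 2x2 "rendered" image, upscaled to 4x4.
small = [[0, 255],
         [255, 0]]
big = upscale_nearest(small, 2)
for row in big:
    print(row)
```

Each source pixel simply becomes a 2×2 block; the whole point of a learned upscaler is that it can do better than this by predicting what the missing detail most likely looked like.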

Of course, the algorithm isn’t perfect. Since it’s making up the details it adds, it can occasionally get things very wrong. The better (and more constrained) the training data, the better the result. I can easily see a custom dataset being built for, say, post-processing smoke simulations: all of the fiddly little details in a fraction of the time. Alex Champandard has already tested it on videogame screenshots, and they work really well.