A short review of five of the best deep learning papers of 2017, each worth reading in depth; if you haven't read them yet, now is the time to catch up.
1. Coolest visuals: transforming between unpaired image sets using CycleGAN
Paper: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks (arxiv.org)
Goal: Learn to convert between unpaired sets of images
The authors begin with two sets of images from different domains, such as horses and zebras, and learn two conversion networks: one converts horses to zebras, and the other does the opposite. Each conversion performs a style transfer, but not of an individual image's style; rather, it captures the aggregate style of an entire image collection, as discovered by the network.
The conversion networks are each trained as a generative adversarial network (GAN for short, an unsupervised learning method in which two neural networks learn by competing against each other), with each generator trying to trick its discriminator into believing that the 'converted' image is real. An additional 'cycle consistency loss' is introduced to encourage an image to remain unchanged after passing through both conversion networks in sequence (i.e., forward and then backward).
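The cycle consistency idea can be sketched with toy stand-ins for the two conversion networks. In the paper `G` and `F` are convolutional networks; here they are simple hypothetical functions chosen only to make the loss computation visible (all names and values are assumptions, not from the paper's code).

```python
import numpy as np

# Hypothetical toy "conversion networks": G maps domain X -> Y (e.g. horse -> zebra),
# F maps Y -> X. Real CycleGAN uses conv nets; these stand-ins just illustrate the loss.
def G(x):
    return x * 2.0          # toy forward conversion

def F(y):
    return y / 2.0 + 0.1    # toy backward conversion (deliberately imperfect inverse)

def cycle_consistency_loss(x_batch, y_batch):
    # L_cyc = E[ |F(G(x)) - x|_1 ] + E[ |G(F(y)) - y|_1 ]
    # Penalizes images that do not return to themselves after a round trip.
    forward_cycle = np.mean(np.abs(F(G(x_batch)) - x_batch))
    backward_cycle = np.mean(np.abs(G(F(y_batch)) - y_batch))
    return forward_cycle + backward_cycle

x = np.ones(4)         # a tiny batch of flattened "horse" images (assumed data)
y = np.ones(4) * 2.0   # a tiny batch of flattened "zebra" images (assumed data)
loss = cycle_consistency_loss(x, y)
```

In training, this term is added (with a weighting coefficient) to the two adversarial losses, so the generators are rewarded both for fooling their discriminators and for preserving image content across the round trip.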
The visuals in the paper are amazing, and it's highly recommended to check out GitHub for some other examples. I'm particularly interested in this one because, unlike many previous approaches, it learns to convert between unpaired image sets, opening the door to applications where matching image pairs may not exist. In addition, the code is very easy to use and experiment with, demonstrating the robustness of the approach and the quality of the implementation.
2. Most elegant: Wasserstein distance, better neural network training
Paper: Wasserstein GAN
Goal: Use better objective functions to train GANs more consistently
This paper proposes a slightly different objective function for training generative adversarial networks. The newly proposed objective is much more stable than standard GAN training because it avoids vanishing gradients during training.
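The contrast between the two objectives can be sketched in a few lines. A minimal illustration, assuming toy critic scores (the values below are made up for demonstration): the standard discriminator squashes its output through a sigmoid and uses cross-entropy, which saturates when the discriminator becomes confident, while the WGAN critic works directly on unbounded real-valued scores.

```python
import numpy as np

# Assumed toy scores that a critic/discriminator might assign to a batch
# of real images and a batch of generated images.
real_scores = np.array([0.8, 1.2, 0.9])
fake_scores = np.array([-0.5, 0.1, -0.2])

def standard_gan_d_loss(real, fake):
    # Classic GAN: binary cross-entropy on sigmoid outputs.
    # Gradients vanish once sigmoid(real) -> 1 and sigmoid(fake) -> 0.
    sig = lambda s: 1.0 / (1.0 + np.exp(-s))
    return -np.mean(np.log(sig(real))) - np.mean(np.log(1.0 - sig(fake)))

def wgan_critic_loss(real, fake):
    # WGAN: the critic maximizes E[f(real)] - E[f(fake)]
    # (an estimate of the Wasserstein distance), so we minimize the negative.
    # No sigmoid, so the gradient does not saturate.
    return -(np.mean(real) - np.mean(fake))

def clip_weights(w, c=0.01):
    # The original WGAN paper enforces the required Lipschitz constraint
    # by clipping every critic weight to the interval [-c, c].
    return np.clip(w, -c, c)

w_loss = wgan_critic_loss(real_scores, fake_scores)
```

The weight clipping step is the paper's simple (if crude) way of keeping the critic 1-Lipschitz, which is what makes the score difference a valid distance estimate; later work replaced it with a gradient penalty.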