Contrastive Learning for Unpaired Image-to-Image Translation
Taesung Park      Alexei A. Efros      Richard Zhang      Jun-Yan Zhu
UC Berkeley
Adobe Research
ECCV 2020 [paper | code | download video (1 min, 10 min) | talk slides]



Abstract

In image translation settings, each patch in the output should reflect the content of the corresponding patch in the input, independent of domain. We propose a straightforward method for doing so -- maximizing mutual information between the two, using a framework based on contrastive learning. The method encourages two elements (corresponding patches) to map to a similar point in a learned feature space, relative to other elements (other patches) in the dataset, referred to as negatives. We explore several critical design choices for making contrastive learning effective in the image synthesis setting. Notably, we use a multilayer, patch-based approach, rather than operate on entire images. Furthermore, we draw negatives from within the input image itself, rather than from the rest of the dataset. We demonstrate that our framework enables one-sided translation in the unpaired image-to-image translation setting, while improving quality and reducing training time. In addition, our method can even be extended to the training setting where each "domain" is only a single image.
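
To make the idea concrete, below is a minimal sketch of a patch-wise contrastive (InfoNCE) loss of the kind described above, written in PyTorch. The function name patch_nce_loss, the temperature value, and the toy feature shapes are illustrative assumptions rather than the released implementation; in the full method, the loss is applied to features taken from several encoder layers, with negatives drawn from other patch locations within the same input image.

# Illustrative sketch of a patch-wise InfoNCE loss; names and defaults are
# assumptions for exposition, not the authors' released code.
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_out, feat_in, temperature=0.07):
    """Contrastive loss between corresponding patch features.

    feat_out: (N, C) features of N patches sampled from the output image.
    feat_in:  (N, C) features of the same N patch locations in the input image.
    Each output patch should match its corresponding input patch (positive),
    while the other N-1 patches from the same image serve as negatives.
    """
    feat_out = F.normalize(feat_out, dim=1)
    feat_in = F.normalize(feat_in, dim=1)

    # (N, N) similarity matrix; the diagonal holds the positive pairs.
    logits = feat_out @ feat_in.t() / temperature
    targets = torch.arange(feat_out.size(0), device=feat_out.device)
    return F.cross_entropy(logits, targets)

# Toy usage: 256 sampled patches with 256-dimensional features each.
out_patches = torch.randn(256, 256)
in_patches = torch.randn(256, 256)
loss = patch_nce_loss(out_patches, in_patches)

Drawing negatives from within the same input image, rather than from other images in the dataset, is one of the design choices the paper highlights as critical in the image synthesis setting.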


Video (1 min)


Video (10 min)


Example Results

Paris to Burano Streets

Russian Blue -> Grumpy Cats

Single-Image Translation


Try our code

[GitHub]


Paper

T. Park, A. A. Efros, R. Zhang, and J.-Y. Zhu.
Contrastive Learning for Unpaired Image-to-Image Translation.
In ECCV, 2020.

[Bibtex]


Acknowledgements

We thank Allan Jabri and Phillip Isola for helpful discussion and feedback. Taesung Park is supported by a Samsung Scholarship and an Adobe Research Fellowship, and part of this work was done while he was an intern at Adobe Research. This work was partially supported by NSF grant IIS-1633310, a grant from SAP, and gifts from Berkeley DeepDrive and Adobe.