SingleGAN: Image-to-Image Translation by a Single-Generator Network using Multiple Generative Adversarial Learning

October 11, 2018 · Entered Twilight · 🏛 Asian Conference on Computer Vision

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 6.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: LICENSE, README.md, data, download_datasets.sh, images, models, options, requirements.txt, scripts, split.py, test.py, train.py, util

Authors: Xiaoming Yu, Xing Cai, Zhenqiang Ying, Thomas Li, Ge Li
arXiv ID: 1810.04991
Category: cs.CV (Computer Vision)
Citations: 40
Venue: Asian Conference on Computer Vision
Repository: https://github.com/Xiaoming-Yu/SingleGAN ⭐ 83
Last Checked: 1 month ago
Abstract
Image translation is a burgeoning field in computer vision where the goal is to learn the mapping between an input image and an output image. However, most recent methods require multiple generators to model the different domain mappings, which is inefficient and ineffective on some multi-domain image translation tasks. In this paper, we propose a novel method, SingleGAN, to perform multi-domain image-to-image translation with a single generator. We introduce a domain code to explicitly control the different generative tasks and integrate multiple optimization goals to ensure the translation. Experimental results on several unpaired datasets show the superior performance of our model in translation between two domains. In addition, we explore variants of SingleGAN for different tasks, including one-to-many domain translation, many-to-many domain translation, and one-to-one domain translation with multimodality. The extended experiments show the universality and extensibility of our model.
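The core idea in the abstract is conditioning one generator on an explicit domain code so a single set of weights handles every mapping. A common way to inject such a code (used by StarGAN-style models; the exact injection point in SingleGAN may differ) is to tile a one-hot domain vector into extra constant channels appended to the input image. A minimal NumPy sketch of that conditioning step, with the function name `with_domain_code` being an illustrative assumption rather than anything from the repository:

```python
import numpy as np

def with_domain_code(image, domain_id, num_domains):
    """Append a one-hot domain code to an image as extra constant channels.

    image: (C, H, W) float array; domain_id selects one of num_domains.
    Returns a (C + num_domains, H, W) array a single generator can consume.
    This mirrors the common conditioning trick; the actual SingleGAN
    architecture may inject the code differently.
    """
    c, h, w = image.shape
    code = np.zeros((num_domains, h, w), dtype=image.dtype)
    code[domain_id] = 1.0  # broadcast the one-hot entry over every pixel
    return np.concatenate([image, code], axis=0)

# The same image with different codes asks the same network for different
# translations -- one generator, multiple domain mappings.
img = np.random.rand(3, 8, 8).astype(np.float32)
x_to_a = with_domain_code(img, domain_id=0, num_domains=2)  # target domain A
x_to_b = with_domain_code(img, domain_id=1, num_domains=2)  # target domain B
```

Because the code channels are constant planes, the generator can read the target domain at any spatial location, which is what lets one network replace the per-pair generators of earlier methods.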
