MineGAN: effective knowledge transfer from GANs to target domains with few images

December 11, 2019 · Entered Twilight · 🏛 Computer Vision and Pattern Recognition

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 5.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: BigGAN.py, BigGANdeep.py, I128_inception_moments.npz, LICENSE, MNISTtf, README.md, TFHub, __pycache__, animal_hash.py, calculate_inception_moments.py, datasets.py, imgs, inception_tf13.py, inception_utils.py, layers.py, logs, losses.py, make_hdf5.py, open?id=1nAle7FCVFZdix2--ks0r5JBkFnKw8ctW, sample.py, scripts, styleGAN, styleGANv2, sync_batchnorm, train.py, train_fns.py, utils.py

Authors: Yaxing Wang, Abel Gonzalez-Garcia, David Berga, Luis Herranz, Fahad Shahbaz Khan, Joost van de Weijer
arXiv ID: 1912.05270
Category: cs.CV: Computer Vision
Citations: 204
Venue: Computer Vision and Pattern Recognition
Repository: https://github.com/yaxingwang/MineGAN ⭐ 148
Last Checked: 1 month ago
Abstract
One of the attractive characteristics of deep neural networks is their ability to transfer knowledge obtained in one domain to other related domains. As a result, high-quality networks can be trained in domains with relatively little training data. This property has been extensively studied for discriminative networks but has received significantly less attention for generative models. Given the often enormous effort required to train GANs, both computationally as well as in the dataset collection, the re-use of pretrained GANs is a desirable objective. We propose a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain, either from a single or multiple pretrained GANs. This is done using a miner network that identifies which part of the generative distribution of each pretrained GAN outputs samples closest to the target domain. Mining effectively steers GAN sampling towards suitable regions of the latent space, which facilitates the posterior finetuning and avoids pathologies of other methods such as mode collapse and lack of flexibility. We perform experiments on several complex datasets using various GAN architectures (BigGAN, Progressive GAN) and show that the proposed method, called MineGAN, effectively transfers knowledge to domains with few target images, outperforming existing methods. In addition, MineGAN can successfully transfer knowledge from multiple pretrained GANs. Our code is available at: https://github.com/yaxingwang/MineGAN.
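The mining idea in the abstract, a small network placed in front of a frozen pretrained generator that steers sampling toward latent regions resembling the target domain, can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation: the `Miner` architecture, the latent dimension, and the placeholder `generator` (standing in for a pretrained BigGAN/Progressive GAN generator) are all assumptions.

```python
import torch
import torch.nn as nn

class Miner(nn.Module):
    """Small MLP mapping input noise u to a latent code z = M(u).

    During mining, only the miner (and the discriminator, omitted here)
    is trained; the pretrained generator stays frozen, so gradients
    reshape where in latent space samples are drawn from.
    """
    def __init__(self, dim_z: int = 128, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_z, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim_z),
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        return self.net(u)

dim_z = 128
miner = Miner(dim_z)

# Placeholder for a pretrained generator so the sketch is self-contained;
# in practice this would be a loaded BigGAN or Progressive GAN generator.
generator = nn.Linear(dim_z, 3 * 32 * 32)
for p in generator.parameters():
    p.requires_grad_(False)  # generator is frozen during the mining stage

u = torch.randn(4, dim_z)          # raw noise
fake = generator(miner(u))         # sampling steered by the miner
```

After mining has focused sampling on target-like regions, the paper's second stage finetunes the full pipeline, which is what the abstract means by mining "facilitating the posterior finetuning".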
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Computer Vision