Generate To Adapt: Aligning Domains using Generative Adversarial Networks

April 06, 2017 · Entered Twilight · 🏛 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 7.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: CODEOWNERS, README.md, eval.py, main.py, models.py, trainer.py, utils.py

Authors: Swami Sankaranarayanan, Yogesh Balaji, Carlos D. Castillo, Rama Chellappa
arXiv ID: 1704.01705
Category: cs.CV: Computer Vision
Citations: 676
Venue: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Repository: https://github.com/yogeshbalaji/Generate_To_Adapt ⭐ 143
Last Checked: 1 month ago
Abstract
Domain Adaptation is an actively researched problem in Computer Vision. In this work, we propose an approach that leverages unsupervised data to bring the source and target distributions closer in a learned joint feature space. We accomplish this by inducing a symbiotic relationship between the learned embedding and a generative adversarial network. This is in contrast to methods that use the adversarial framework for realistic data generation and for retraining deep models with such data. We demonstrate the strength and generality of our approach by performing experiments on three different tasks with varying levels of difficulty: (1) digit classification (MNIST, SVHN and USPS datasets), (2) object recognition using the OFFICE dataset, and (3) domain adaptation from synthetic to real data. Our method achieves state-of-the-art performance in most experimental settings and is by far the only GAN-based method that has been shown to work well across different datasets such as OFFICE and DIGITS.
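The abstract's core idea is that adversarial gradients flowing back into a shared embedding pull the source and target feature distributions together. As a rough illustration only, here is a toy adversarial feature-alignment loop in NumPy: a linear embedding `F` is trained so that a logistic domain discriminator (`d_w`, `d_b`) cannot tell target embeddings from source ones. This is a generic domain-adversarial sketch, not the paper's generator-based pipeline, and every name, shape, and hyperparameter below is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "source" and "target" data: same structure, shifted distribution.
# (Illustrative stand-ins, not datasets from the paper.)
src = rng.normal(loc=0.0, scale=1.0, size=(256, 4))
tgt = rng.normal(loc=2.0, scale=1.0, size=(256, 4))

# F: shared linear embedding; (d_w, d_b): logistic domain discriminator.
F = rng.normal(size=(4, 2)) * 0.1
d_w = rng.normal(size=2) * 0.1
d_b = 0.0

def sigmoid(x):
    # Clipped for numerical safety.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

lr = 0.05
for step in range(500):
    zs, zt = src @ F, tgt @ F                    # embeddings of both domains
    # --- Discriminator step: label source = 1, target = 0 ---
    ps, pt = sigmoid(zs @ d_w + d_b), sigmoid(zt @ d_w + d_b)
    gs, gt = ps - 1.0, pt                        # per-sample dBCE/dlogit
    d_w -= lr * (zs.T @ gs + zt.T @ gt) / 512
    d_b -= lr * (gs.sum() + gt.sum()) / 512
    # --- Embedding step: update F so the target is labeled "source" ---
    pt = sigmoid((tgt @ F) @ d_w + d_b)
    gF = tgt.T @ np.outer(pt - 1.0, d_w) / 256   # dBCE(target -> 1)/dF
    F -= lr * gF

# If alignment worked, the discriminator ends up near chance (~0.5)
# on the shared embedding space.
acc = ((sigmoid((src @ F) @ d_w + d_b) > 0.5).mean()
       + (sigmoid((tgt @ F) @ d_w + d_b) < 0.5).mean()) / 2
print(round(float(acc), 2))
```

The alternating updates mirror the adversarial dynamic described above: the discriminator sharpens its domain boundary, and the embedding moves target features across it, which is the alignment pressure the paper exploits (there via a full generator–discriminator pair rather than a bare domain classifier).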
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Computer Vision