Improving Unsupervised Domain Adaptation by Reducing Bi-level Feature Redundancy

December 28, 2020 · Declared Dead · 🏛 arXiv.org

⚰️ CAUSE OF DEATH: The Empty Tomb
The linked GitHub repository is empty
Authors: Mengzhu Wang, Xiang Zhang, Long Lan, Wei Wang, Huibin Tan, Zhigang Luo
arXiv ID: 2012.15732
Category: cs.LG (Machine Learning)
Citations: 2
Venue: arXiv.org
Repository: https://github.com/dreamkily/gUDA
Last Checked: 1 month ago
Abstract
Reducing feature redundancy has been shown to improve the accuracy of deep learning models, making it indispensable for unsupervised domain adaptation (UDA) models as well. Nevertheless, most recent efforts in UDA ignore this point. Moreover, the main schemes for reducing redundancy were developed independently of UDA and involve only a single domain, so they may not be effective for cross-domain tasks. In this paper, we emphasize the significance of reducing feature redundancy for improving UDA in a bi-level manner. At the first level, we ensure compact domain-specific features with a transferable decorrelated normalization module, which preserves domain-specific information while easing the side effects of feature redundancy on the subsequent domain-invariance step. At the second level, the domain-invariant feature redundancy caused by the domain-shared representation is further mitigated via an alternative orthogonality constraint for better generalization. Both components can be easily plugged into any BN-based backbone network. Simply applying them to ResNet50 achieves performance competitive with the state of the art on five popular benchmarks. Our code will be available at https://github.com/dreamkily/gUDA.
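
Since the repository is empty, the authors' actual implementation cannot be recovered. For orientation only, here is a minimal PyTorch sketch of what the two levels might look like, assuming a ZCA-whitening-style normalization (in the spirit of Decorrelated Batch Normalization) for the first level and a soft orthogonality penalty ‖FᵀF − I‖²_F on shared features for the second. The names `DecorrelatedNorm` and `soft_orthogonality_penalty` and all hyperparameters are hypothetical stand-ins, not the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecorrelatedNorm(nn.Module):
    """ZCA-whitening normalization, one instance per domain.

    Hypothetical sketch: the paper's "transferable decorrelated
    normalization" is not recoverable from the empty repository.
    Affine (gamma/beta) parameters are omitted for brevity.
    """

    def __init__(self, num_features: int, eps: float = 1e-5, momentum: float = 0.1):
        super().__init__()
        self.eps = eps
        self.momentum = momentum
        self.register_buffer("running_mean", torch.zeros(num_features, 1))
        self.register_buffer("running_whiten", torch.eye(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        x_flat = x.permute(1, 0, 2, 3).reshape(c, -1)  # (C, N*H*W)
        if self.training:
            mean = x_flat.mean(dim=1, keepdim=True)
            centered = x_flat - mean
            cov = centered @ centered.t() / centered.shape[1]
            # ZCA whitening matrix Sigma^{-1/2} via eigendecomposition
            eigval, eigvec = torch.linalg.eigh(
                cov + self.eps * torch.eye(c, device=x.device))
            whiten = eigvec @ torch.diag(eigval.rsqrt()) @ eigvec.t()
            with torch.no_grad():  # track running statistics for eval
                self.running_mean.mul_(1 - self.momentum).add_(self.momentum * mean)
                self.running_whiten.mul_(1 - self.momentum).add_(self.momentum * whiten)
        else:
            centered = x_flat - self.running_mean
            whiten = self.running_whiten
        out = whiten @ centered  # decorrelated features
        return out.reshape(c, n, h, w).permute(1, 0, 2, 3)


def soft_orthogonality_penalty(features: torch.Tensor) -> torch.Tensor:
    """||F^T F - I||_F^2 on shared, domain-invariant features of shape (N, D).

    A common orthogonality surrogate; the paper's exact constraint is unknown.
    """
    f = F.normalize(features, dim=0)  # unit-norm feature columns
    gram = f.t() @ f                  # (D, D) feature correlation matrix
    identity = torch.eye(gram.shape[0], device=features.device)
    return (gram - identity).pow(2).sum()
```

In a UDA training loop, one would presumably keep separate `DecorrelatedNorm` statistics for source and target batches (the "transferable" aspect) and add `lam * soft_orthogonality_penalty(shared_feats)` to the usual classification-plus-alignment loss, with `lam` a tunable weight.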
