S2SD: Simultaneous Similarity-based Self-Distillation for Deep Metric Learning

September 17, 2020 · Entered Twilight · 🏛 International Conference on Machine Learning

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, README.md, Sample_Runs, architectures, batchminer, criteria, datasampler, datasets, evaluation, images, main.py, metrics, parameters.py, utilities

Authors: Karsten Roth, Timo Milbich, Björn Ommer, Joseph Paul Cohen, Marzyeh Ghassemi
arXiv ID: 2009.08348
Category: cs.CV: Computer Vision
Citations: 16
Venue: International Conference on Machine Learning
Repository: https://github.com/MLforHealth/S2SD ⭐ 44
Last Checked: 1 month ago
Abstract
Deep Metric Learning (DML) provides a crucial tool for visual similarity and zero-shot applications by learning generalizing embedding spaces, although recent work in DML has shown strong performance saturation across training objectives. However, generalization capacity is known to scale with the embedding space dimensionality. Unfortunately, high-dimensional embeddings also create higher retrieval cost for downstream applications. To remedy this, we propose Simultaneous Similarity-based Self-distillation (S2SD). S2SD extends DML with knowledge distillation from auxiliary, high-dimensional embedding and feature spaces to leverage complementary context during training, while retaining test-time cost and with negligible changes to the training time. Experiments and ablations across different objectives and standard benchmarks show that S2SD offers notable improvements of up to 7% in Recall@1, while also setting a new state-of-the-art. Code available at https://github.com/MLforHealth/S2SD.
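The core idea in the abstract, distilling relational knowledge from an auxiliary high-dimensional embedding head into the base embedding by matching batch similarity matrices, can be sketched as below. This is a minimal illustration of similarity-based self-distillation, not the authors' exact implementation; the function name, temperature parameter, and KL-on-softmaxed-similarities formulation are assumptions for the sketch.

```python
import numpy as np

def _softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def similarity_distillation_loss(base_emb, aux_emb, temperature=1.0):
    """Hypothetical sketch of similarity-based self-distillation.

    base_emb: (B, d)  low-dim student embeddings used at test time.
    aux_emb:  (B, D)  auxiliary high-dim teacher embeddings (training only).
    Returns KL(teacher || student) between row-softmaxed cosine-similarity
    matrices, averaged over the batch.
    """
    def l2_normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    # Batch self-similarity matrices in each space (cosine similarities).
    sim_student = l2_normalize(base_emb) @ l2_normalize(base_emb).T
    sim_teacher = l2_normalize(aux_emb) @ l2_normalize(aux_emb).T

    # Turn each row into a distribution over the batch and match them.
    p_teacher = _softmax(sim_teacher / temperature)
    p_student = _softmax(sim_student / temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                             - np.log(p_student + 1e-12)), axis=1)
    return float(np.mean(kl))
```

In training, this term would be added to the usual DML objective; the auxiliary head is discarded at test time, which is how the method keeps retrieval cost fixed while borrowing context from the larger space.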
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Computer Vision