Boosting vision transformers for image retrieval

October 21, 2022 · Declared Dead · 🏛 IEEE Workshop/Winter Conference on Applications of Computer Vision

📜 CAUSE OF DEATH: Death by README
The repository contains no code: only a README (and a LICENSE file).

Repo contents: LICENSE, README.md

Authors: Chull Hwan Song, Jooyoung Yoon, Shunghyun Choi, Yannis Avrithis
arXiv ID: 2210.11909
Category: cs.CV (Computer Vision)
Cross-listed: cs.IR, cs.LG
Citations: 41
Venue: IEEE Workshop/Winter Conference on Applications of Computer Vision
Repository: https://github.com/dealicious-inc/DToP ⭐ 37
Last Checked: 1 month ago
Abstract
Vision transformers have achieved remarkable progress in vision tasks such as image classification and detection. However, in instance-level image retrieval, transformers have not yet shown good performance compared to convolutional networks. We propose a number of improvements that make transformers outperform the state of the art for the first time. (1) We show that a hybrid architecture is more effective than plain transformers, by a large margin. (2) We introduce two branches collecting global (classification token) and local (patch tokens) information, from which we form a global image representation. (3) In each branch, we collect multi-layer features from the transformer encoder, corresponding to skip connections across distant layers. (4) We enhance locality of interactions at the deeper layers of the encoder, which is the relative weakness of vision transformers. We train our model on all commonly used training sets and, for the first time, we make fair comparisons separately per training set. In all cases, we outperform previous models based on global representation. Public code is available at https://github.com/dealicious-inc/DToP.
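The abstract's improvements (2) and (3), a global branch over the classification token and a local branch over patch tokens, each aggregated across several encoder layers into one L2-normalized retrieval descriptor, can be sketched as follows. This is a minimal illustration of the idea only, not the authors' implementation (the repo has no code); all function and variable names here are made up, and numpy stands in for a real ViT backbone.

```python
import numpy as np

def aggregate_descriptor(layer_tokens, k=4):
    """Illustrative two-branch aggregation (hypothetical names, not the
    paper's API): given per-layer token outputs of a ViT encoder, combine
    [CLS] tokens (global branch) with pooled patch tokens (local branch)
    from the last k layers, mimicking skip connections across layers."""
    global_feats, local_feats = [], []
    for tokens in layer_tokens[-k:]:             # multi-layer features
        cls_tok, patches = tokens[0], tokens[1:]
        global_feats.append(cls_tok)             # global branch: [CLS] token
        local_feats.append(patches.max(axis=0))  # local branch: pool patch tokens
    desc = np.concatenate([np.mean(global_feats, axis=0),
                           np.mean(local_feats, axis=0)])
    return desc / np.linalg.norm(desc)           # L2-normalize for retrieval

# Toy usage: 12 encoder layers, 1 [CLS] + 196 patch tokens, embedding dim 64.
layers = [np.random.rand(197, 64) for _ in range(12)]
descriptor = aggregate_descriptor(layers)        # shape (128,), unit norm
```

Retrieval would then rank database images by cosine similarity (a dot product, since descriptors are unit-normalized) against the query descriptor.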
