Localization vs. Semantics: Visual Representations in Unimodal and Multimodal Models

December 01, 2022 · Declared Dead · 🏛 Conference of the European Chapter of the Association for Computational Linguistics

🦴 CAUSE OF DEATH: Skeleton Repo
Boilerplate only, no real code

Repo contents: .gitmodules, CLIP, OFA, README.md, detectron2, mae, moco-v3, segmenter, vlp_probe

Authors: Zhuowan Li, Cihang Xie, Benjamin Van Durme, Alan Yuille
arXiv ID: 2212.00281
Category: cs.CV (Computer Vision)
Cross-listed: cs.CL
Citations: 2
Venue: Conference of the European Chapter of the Association for Computational Linguistics
Repository: https://github.com/Lizw14/visual_probing ⭐ 1
Last Checked: 1 month ago
Abstract
Despite the impressive advancements achieved through vision-and-language pretraining, it remains unclear whether this joint learning paradigm can help understand each individual modality. In this work, we conduct a comparative analysis of the visual representations in existing vision-and-language models and vision-only models by probing a broad range of tasks, aiming to assess the quality of the learned representations in a nuanced manner. Interestingly, our empirical observations suggest that vision-and-language models are better at label prediction tasks like object and attribute prediction, while vision-only models are stronger at dense prediction tasks that require more localized information. We hope our study sheds light on the role of language in visual learning, and serves as an empirical guide for various pretrained models. Code will be released at https://github.com/Lizw14/visual_probing
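The abstract describes linear probing: training a small classifier on frozen features to measure how much task-relevant information a pretrained backbone encodes. A minimal sketch of that generic technique, using synthetic stand-in features rather than the paper's actual models or tasks (the class count, feature dimension, and least-squares probe here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Linear probing sketch: fit a linear classifier on *frozen* features.
# Higher probe accuracy suggests the features encode more label information.
# Features below are synthetic (class mean + noise), standing in for the
# output of a frozen vision or vision-and-language backbone.

rng = np.random.default_rng(0)
n_classes, dim = 4, 32
class_means = rng.normal(size=(n_classes, dim))  # fixed "concept" directions

def make_features(n):
    """Synthetic frozen-backbone features: class mean plus small noise."""
    labels = rng.integers(0, n_classes, size=n)
    feats = class_means[labels] + 0.1 * rng.normal(size=(n, dim))
    return feats, labels

def train_linear_probe(feats, labels):
    """Fit a one-vs-all linear probe by least squares on one-hot targets."""
    X = np.hstack([feats, np.ones((len(feats), 1))])  # append bias column
    Y = np.eye(n_classes)[labels]                     # one-hot targets
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def probe_accuracy(W, feats, labels):
    X = np.hstack([feats, np.ones((len(feats), 1))])
    preds = (X @ W).argmax(axis=1)
    return float((preds == labels).mean())

train_X, train_y = make_features(500)
test_X, test_y = make_features(200)
W = train_linear_probe(train_X, train_y)
acc = probe_accuracy(W, test_X, test_y)
print(f"probe accuracy: {acc:.2f}")
```

In practice the probe is trained per task (object labels, attributes, dense prediction targets) on features from each backbone being compared, which is how the paper contrasts vision-only and vision-and-language representations.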
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Computer Vision

Died the same way — 🦴 Skeleton Repo

R.I.P. 🦴 Skeleton Repo

Neural Style Transfer: A Review

Yongcheng Jing, Yezhou Yang, ... (+4 more)

cs.CV ๐Ÿ› IEEE TVCG ๐Ÿ“š 828 cites 8 years ago