Towards Difficulty-Agnostic Efficient Transfer Learning for Vision-Language Models

November 27, 2023 · Entered Twilight · 🏛 Conference on Empirical Methods in Natural Language Processing

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitignore, README.md, clip, clip_coca, clip_eva, clip_words.csv, configs, datasets, docs, interpret_prompts, lpclip, parse_test_res.py, requirements.txt, scripts, train.py, trainers, utils.py

Authors: Yongjin Yang, Jongwoo Ko, Se-Young Yun
arXiv ID: 2311.15569
Category: cs.CV (Computer Vision), cross-listed in cs.AI
Citations: 1
Venue: Conference on Empirical Methods in Natural Language Processing
Repository: https://github.com/YangYongJin/APEX (⭐ 8)
Last checked: 1 month ago
Abstract
Vision-language models (VLMs) like CLIP have demonstrated remarkable applicability across a variety of downstream tasks, including zero-shot image classification. Recently, the use of prompts or adapters for efficient transfer learning (ETL) has gained significant attention for effectively adapting to downstream tasks. However, previous studies have overlooked the fact that the transfer difficulty of downstream tasks varies. In this paper, we empirically analyze how each ETL method behaves with respect to transfer difficulty. Our observations indicate that using vision prompts and text adapters is crucial for adaptability and generalizability in high-difficulty domains. We also find that an adaptive ensemble, which integrates task-adapted VLMs with pre-trained VLMs and leverages more general knowledge in low-difficulty domains and less in high-difficulty ones, consistently enhances performance across both types of domains. Based on these observations, we propose an adaptive ensemble method that combines visual prompts and text adapters with pre-trained VLMs, weighted by transfer difficulty, to achieve optimal performance on any target domain. In experiments on extensive benchmarks, our method consistently outperforms all baselines, particularly on unseen tasks, demonstrating its effectiveness.
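Neither this page nor the repo listing spells out how the ensemble is computed, but the abstract's description reduces to a difficulty-weighted blend of the two models' outputs. Below is a minimal sketch in PyTorch, assuming per-class logits from both the pre-trained (zero-shot) and the task-adapted model are already available; the function name `adaptive_ensemble`, the scalar `difficulty`, and the simple linear weighting are illustrative assumptions, not the authors' APEX implementation (see the `trainers` directory in the repo for the real code).

```python
import torch

def adaptive_ensemble(zero_shot_logits: torch.Tensor,
                      adapted_logits: torch.Tensor,
                      difficulty: float) -> torch.Tensor:
    """Blend pre-trained and task-adapted predictions.

    `difficulty` in [0, 1] is a hypothetical transfer-difficulty score:
    low values weight the pre-trained model's general knowledge more
    heavily; high values favor the task-adapted model.
    """
    alpha = difficulty  # assumption: weight grows with transfer difficulty
    return alpha * adapted_logits + (1.0 - alpha) * zero_shot_logits

# Toy usage: random logits for a batch of 4 images over 10 classes.
zs = torch.randn(4, 10)  # zero-shot CLIP logits (stand-in)
ad = torch.randn(4, 10)  # logits from the prompt/adapter-tuned model (stand-in)
preds = adaptive_ensemble(zs, ad, difficulty=0.8).argmax(dim=-1)
print(preds)
```

In this reading, a high-difficulty domain (e.g., a specialized dataset far from CLIP's pre-training distribution) pushes the weight toward the adapted model, while a low-difficulty one keeps the ensemble close to zero-shot behavior.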
Community shame: Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Computer Vision