Guarding Barlow Twins Against Overfitting with Mixed Samples

December 04, 2023 · Entered Twilight · 🏛 Advanced Video and Signal Based Surveillance

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .DS_Store, .github, .gitignore, .gitmodules, LICENSE, README.md, augmentations, data_statistics.py, download_imagenet.sh, environment.yml, evaluate_imagenet.py, evaluate_transfer.py, figs, hubconf.py, linear.py, main.py, main_imagenet.py, model.py, preprocess_datasets, scripts-linear-resnet18, scripts-linear-resnet50, scripts-pretrain-resnet18, scripts-pretrain-resnet50, scripts-transfer-resnet18, setup.sh, ssl-sota, transfer_datasets, utils.py

Authors: Wele Gedara Chaminda Bandara, Celso M. De Melo, Vishal M. Patel
arXiv ID: 2312.02151
Category: cs.CV: Computer Vision
Cross-listed: cs.AI, cs.LG
Citations: 12
Venue: Advanced Video and Signal Based Surveillance
Repository: https://github.com/wgcban/mix-bt.git ⭐ 19
Last Checked: 1 month ago
Abstract
Self-supervised Learning (SSL) aims to learn transferable feature representations for downstream applications without relying on labeled data. The Barlow Twins algorithm, renowned for its widespread adoption and straightforward implementation compared to counterparts such as contrastive learning methods, minimizes feature redundancy while maximizing invariance to common corruptions. Optimizing this objective forces the network to learn useful representations while avoiding noisy or constant features, resulting in improved downstream task performance with limited adaptation. Despite Barlow Twins' proven effectiveness in pre-training, the underlying SSL objective can inadvertently cause feature overfitting due to the lack of strong interaction between samples, unlike contrastive learning approaches. From our experiments, we observe that optimizing the Barlow Twins objective does not necessarily guarantee sustained improvements in representation quality beyond a certain pre-training phase, and can potentially degrade downstream performance on some datasets. To address this challenge, we introduce Mixed Barlow Twins, which aims to improve sample interaction during Barlow Twins training via linearly interpolated samples. This yields an additional regularization term on top of the original Barlow Twins objective, under the assumption that linear interpolation in the input space translates to linearly interpolated features in the feature space. Pre-training with this regularization effectively mitigates feature overfitting and further improves downstream performance on the CIFAR-10, CIFAR-100, TinyImageNet, STL-10, and ImageNet datasets. The code and checkpoints are available at: https://github.com/wgcban/mix-bt.git
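To make the two ideas in the abstract concrete, here is a minimal numpy sketch, not the repository's actual implementation: a standard Barlow Twins loss (invariance on the diagonal of the cross-correlation matrix, redundancy reduction off-diagonal), plus an illustrative mixup-style regularizer that penalizes the gap between the embedding of a mixed input and the corresponding mix of embeddings. The function names, the MSE form of the regularizer, and the `lambd` weight are assumptions for illustration; the paper's exact regularization term may differ.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lambd=5e-3):
    """Barlow Twins objective on two batches of embeddings (n, d)."""
    # Normalize each embedding dimension to zero mean / unit variance over the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-9)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-9)
    n, _ = z_a.shape
    c = z_a.T @ z_b / n                                   # (d, d) cross-correlation
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()             # invariance term -> diag = 1
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()   # redundancy term -> off-diag = 0
    return float(on_diag + lambd * off_diag)

def mixup_regularizer(encode, x_a, x_b, lam):
    """Illustrative regularizer (assumption, not the paper's exact term):
    linear mixing in input space should yield linearly mixed features."""
    z_mix = encode(lam * x_a + (1.0 - lam) * x_b)
    z_target = lam * encode(x_a) + (1.0 - lam) * encode(x_b)
    return float(((z_mix - z_target) ** 2).mean())
```

As a sanity check on the design, a purely linear encoder satisfies the interpolation assumption exactly, so the regularizer vanishes for it; a nonlinear network is only pushed toward that behavior, which is the extra sample interaction the abstract describes.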
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Computer Vision