R.I.P. 👻 Ghosted
The Lie Derivative for Measuring Learned Equivariance
October 06, 2022 · Entered Twilight · International Conference on Learning Representations
Repo contents: .gitignore, .gitmodules, LICENSE, README.md, assets, exps_e2e.py, exps_layerwise.py, lee, pytorch-image-models, requirements.txt, stylegan3, sweep_configs
Authors
Nate Gruver, Marc Finzi, Micah Goldblum, Andrew Gordon Wilson
arXiv ID
2210.02984
Category
cs.LG: Machine Learning
Cross-listed
cs.AI, cs.CV, stat.ML
Citations
57
Venue
International Conference on Learning Representations
Repository
https://github.com/ngruver/lie-deriv
⭐ 40
Last Checked
1 month ago
Abstract
Equivariance guarantees that a model's predictions capture key symmetries in data. When an image is translated or rotated, an equivariant model's representation of that image will translate or rotate accordingly. The success of convolutional neural networks has historically been tied to translation equivariance directly encoded in their architecture. The rising success of vision transformers, which have no explicit architectural bias towards equivariance, challenges this narrative and suggests that augmentations and training data might also play a significant role in their performance. In order to better understand the role of equivariance in recent vision models, we introduce the Lie derivative, a method for measuring equivariance with strong mathematical foundations and minimal hyperparameters. Using the Lie derivative, we study the equivariance properties of hundreds of pretrained models, spanning CNNs, transformers, and Mixer architectures. The scale of our analysis allows us to separate the impact of architecture from other factors like model size or training method. Surprisingly, we find that many violations of equivariance can be linked to spatial aliasing in ubiquitous network layers, such as pointwise non-linearities, and that as models get larger and more accurate they tend to display more equivariance, regardless of architecture. For example, transformers can be more equivariant than convolutional neural networks after training.
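The abstract's core idea — that equivariance error can be measured as the infinitesimal failure of a model to commute with a symmetry transformation — can be illustrated with a finite-difference approximation. The sketch below is not the authors' implementation (which lives in the `lie-deriv` repository); the function names `translate`, `lie_derivative_translation`, `moving_average`, and `gated` are all hypothetical, and it uses discrete circular shifts of a 1-D signal in place of the continuous group actions and vision models studied in the paper.

```python
import numpy as np

def translate(x, t):
    """Circularly shift a 1-D signal by t samples (a discrete translation)."""
    return np.roll(x, t)

def lie_derivative_translation(f, x, t=1):
    """Finite-difference estimate of the translation equivariance error of f at x.

    If f is translation-equivariant, f(translate(x, t)) == translate(f(x), t),
    so transforming the input, applying f, and undoing the transformation
    recovers f(x) exactly; the norm below is then ~0.
    """
    round_trip = translate(f(translate(x, t)), -t)  # act, apply f, act back
    return np.linalg.norm(round_trip - f(x)) / abs(t)

def moving_average(x, k=3):
    """Circular moving average: commutes with circular shifts (equivariant)."""
    n = len(x)
    idx = np.arange(n)
    return sum((1.0 / k) * x[(idx + j - k // 2) % n] for j in range(k))

def gated(x):
    """Position-dependent gain: breaks translation equivariance."""
    gate = np.linspace(0.0, 1.0, len(x))
    return gate * x

x = np.random.default_rng(0).normal(size=32)
print(lie_derivative_translation(moving_average, x))  # ~0: equivariant
print(lie_derivative_translation(gated, x))           # > 0: equivariance violated
```

Dividing by the transformation magnitude `t` makes the quantity a discrete stand-in for the Lie derivative, the derivative of the equivariance error at the identity transformation; the paper's contribution is computing this infinitesimally and layerwise for hundreds of pretrained models.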
Similar Papers
In the same crypt · Machine Learning
XGBoost: A Scalable Tree Boosting System
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Semi-Supervised Classification with Graph Convolutional Networks
Proximal Policy Optimization Algorithms