SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability
June 19, 2017 · Entered Twilight · arXiv.org
"Last commit was 7.0 years ago (≥5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: CONTRIBUTING.md, LICENSE, README.md, cca_core.py, dft_ccas.py, examples, numpy_pca.py, numpy_pls.py, pwcca.py, tutorials
Authors
Maithra Raghu, Justin Gilmer, Jason Yosinski, Jascha Sohl-Dickstein
arXiv ID
1706.05806
Category
stat.ML: Machine Learning (Stat)
Cross-listed
cs.LG
Citations
34
Venue
arXiv.org
Repository
https://github.com/google/svcca/
★ 643
Last Checked
1 month ago
Abstract
We propose a new technique, Singular Vector Canonical Correlation Analysis (SVCCA), a tool for quickly comparing two representations in a way that is both invariant to affine transform (allowing comparison between different layers and networks) and fast to compute (allowing more comparisons to be calculated than with previous methods). We deploy this tool to measure the intrinsic dimensionality of layers, showing in some cases needless over-parameterization; to probe learning dynamics throughout training, finding that networks converge to final representations from the bottom up; to show where class-specific information in networks is formed; and to suggest new training regimes that simultaneously save computation and overfit less. Code: https://github.com/google/svcca/
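The abstract compresses the method into two steps: take the subspace spanned by a layer's activations over a set of datapoints, prune it with an SVD, then align the two pruned subspaces with CCA. Below is a minimal NumPy sketch of that pipeline. It illustrates the idea rather than reproducing the repo's implementation (see cca_core.py and the tutorials for that); the function name svcca_similarity, the 0.99 variance threshold, and the (neurons × datapoints) matrix shapes are assumptions based on the paper's setup.

```python
import numpy as np

def svcca_similarity(acts1, acts2, keep_variance=0.99):
    """Sketch of SVCCA: SVD-prune each representation, then CCA.

    acts1, acts2: (num_neurons, num_datapoints) activations of two
    layers evaluated on the same datapoints. Threshold is assumed.
    """
    def svd_prune(acts):
        # Center each neuron, then keep the top singular directions
        # explaining `keep_variance` of the total variance.
        acts = acts - acts.mean(axis=1, keepdims=True)
        _, s, vt = np.linalg.svd(acts, full_matrices=False)
        var = np.cumsum(s**2) / np.sum(s**2)
        k = int(np.searchsorted(var, keep_variance)) + 1
        return s[:k, None] * vt[:k]      # pruned (k, num_datapoints) view

    def orthonormal_basis(acts):
        # Whitening step: an orthonormal basis of the row space,
        # so canonical correlations reduce to singular values below.
        acts = acts - acts.mean(axis=1, keepdims=True)
        _, _, vt = np.linalg.svd(acts, full_matrices=False)
        return vt

    b1 = orthonormal_basis(svd_prune(acts1))
    b2 = orthonormal_basis(svd_prune(acts2))
    # Canonical correlations between two subspaces with orthonormal
    # bases are the singular values of their cross product.
    rho = np.linalg.svd(b1 @ b2.T, compute_uv=False)
    return rho.mean()                    # SVCCA similarity in [0, 1]

# Hypothetical usage: two layers of different widths, same 1000 inputs.
layer_a = np.random.randn(64, 1000)
layer_b = np.random.randn(128, 1000)
print(svcca_similarity(layer_a, layer_b))
```

The SVD pruning is what discards low-variance noise directions before comparison, and the whiten-then-SVD trick at the end is a standard way to compute canonical correlations between two subspaces; both correlations and their mean land in [0, 1], which is what makes scores comparable across layers and networks of different widths.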
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Machine Learning (Stat)
Distilling the Knowledge in a Neural Network
R.I.P. 👻 Ghosted
Layer Normalization
R.I.P. 👻 Ghosted
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
R.I.P. 👻 Ghosted
Domain-Adversarial Training of Neural Networks
R.I.P. 👻 Ghosted