Tensor Programs II: Neural Tangent Kernel for Any Architecture
June 25, 2020 · Entered Twilight · arXiv.org
"Last commit was 5.0 years ago (β₯5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: .gitignore, Batchnorm-NTK.ipynb, NTKdeviation.png, Plot.ipynb, README.md, RNN-NTK.ipynb, Transformer-NTK.ipynb, batchnorm_ntk.frob, colab, images, rnn_ap_ntk.frob, transformer_ntk.frob, utils.py
Authors
Greg Yang
arXiv ID
2006.14548
Category
stat.ML: Machine Learning (Stat)
Cross-listed
cond-mat.dis-nn, cs.LG, cs.NE
Citations
159
Venue
arXiv.org
Repository
https://github.com/thegregyang/NTK4A
★ 110
Last Checked
1 month ago
Abstract
We prove that a randomly initialized neural network of *any architecture* has its Neural Tangent Kernel (NTK) converge to a deterministic limit, as the network widths tend to infinity. We demonstrate how to calculate this limit. In prior literature, the heuristic study of neural network gradients often assumes every weight matrix used in forward propagation is independent from its transpose used in backpropagation (Schoenholz et al. 2017). This is known as the *gradient independence assumption (GIA)*. We identify a commonly satisfied condition, which we call *Simple GIA Check*, such that the NTK limit calculation based on GIA is correct. Conversely, when Simple GIA Check fails, we show GIA can result in wrong answers. Our material here presents the NTK results of Yang (2019a) in a friendly manner and showcases the *tensor programs* technique for understanding wide neural networks. We provide reference implementations of infinite-width NTKs of recurrent neural networks, transformers, and batch normalization at https://github.com/thegregyang/NTK4A.
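The convergence claim is easy to probe numerically. The sketch below is an illustration under an assumed setup, not code from the NTK4A repository: it computes the empirical NTK Θ(x, x̄) = ⟨∇θ f(x), ∇θ f(x̄)⟩ of a one-hidden-layer ReLU network in the NTK parametrization, f(x) = v · relu(Wx/√d)/√n, and shows the kernel value concentrating around a deterministic limit as the width n grows.

```python
# Minimal sketch (assumed setup, not from the NTK4A repo): watch the empirical
# NTK of a width-n one-hidden-layer ReLU network concentrate as n grows.
# Network in NTK parametrization: f(x) = v . relu(W x / sqrt(d)) / sqrt(n),
# with entries of W and v drawn i.i.d. N(0, 1) at initialization.
import numpy as np

def empirical_ntk(x, xb, n, d, rng):
    """Theta(x, xb) = sum over params of df/dtheta(x) * df/dtheta(xb)."""
    W = rng.standard_normal((n, d))
    v = rng.standard_normal(n)
    h, hb = W @ x / np.sqrt(d), W @ xb / np.sqrt(d)        # preactivations
    phi, phib = np.maximum(h, 0), np.maximum(hb, 0)        # relu(h)
    dphi, dphib = (h > 0).astype(float), (hb > 0).astype(float)  # relu'(h)
    # Contribution of v: df/dv_i = phi(h_i)/sqrt(n)
    term_v = phi @ phib / n
    # Contribution of W: df/dW_ij = v_i relu'(h_i) x_j / sqrt(n d)
    term_W = (v**2 * dphi * dphib).sum() / n * (x @ xb / d)
    return term_v + term_W

d = 10
rng = np.random.default_rng(0)
x, xb = rng.standard_normal(d), rng.standard_normal(d)
for n in [100, 1_000, 10_000, 100_000]:
    vals = [empirical_ntk(x, xb, n, d, np.random.default_rng(s)) for s in range(5)]
    print(f"n={n:>6}: mean={np.mean(vals):.4f}  std across seeds={np.std(vals):.4f}")
```

Across random initializations, the spread of the kernel value shrinks as n grows while the mean stabilizes; the paper proves this concentration to a deterministic limit for networks of any architecture, not just this toy MLP.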
Similar Papers
In the same crypt: Machine Learning (Stat)
Distilling the Knowledge in a Neural Network (R.I.P. 👻 Ghosted)
Layer Normalization (R.I.P. 👻 Ghosted)
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning (R.I.P. 👻 Ghosted)
Domain-Adversarial Training of Neural Networks (R.I.P. 👻 Ghosted)