R.I.P. 👻 Ghosted
SGDR: Stochastic Gradient Descent with Warm Restarts
August 13, 2016 · Entered Twilight · International Conference on Learning Representations
"Last commit was 9.0 years ago (≥5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: README.md, SGDR_WRNs.py
Authors
Ilya Loshchilov, Frank Hutter
arXiv ID
1608.03983
Category
cs.LG: Machine Learning
Cross-listed
cs.NE,
math.OC
Citations
9.8K
Venue
International Conference on Learning Representations
Repository
https://github.com/loshchil/SGDR
⭐ 255
Last Checked
1 month ago
Abstract
Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR
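The "warm restart technique" in the abstract is a cosine-shaped learning-rate decay that is periodically reset to its maximum value, with each cycle optionally longer than the last. Below is a minimal Python sketch of that schedule; the function and parameter names (eta_min, eta_max, T_0, T_mult) are illustrative assumptions, not code taken from the repository's SGDR_WRNs.py.

import math

def sgdr_learning_rate(epoch, eta_min=0.0, eta_max=0.1, T_0=10, T_mult=2):
    """Cosine annealing with warm restarts (SGDR), sketched per epoch.

    The learning rate decays from eta_max to eta_min along a cosine curve
    over a cycle of T_i epochs, then restarts at eta_max; each new cycle
    is T_mult times longer than the previous one. Defaults are illustrative.
    """
    T_i, t = T_0, epoch
    while t >= T_i:          # locate the current restart cycle
        t -= T_i
        T_i *= T_mult
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t / T_i))

# Example: with T_0=10 and T_mult=2, the rate restarts at epochs 10 and 30.
lrs = [sgdr_learning_rate(e) for e in range(40)]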
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Machine Learning
R.I.P. 👻 Ghosted · XGBoost: A Scalable Tree Boosting System
R.I.P. 👻 Ghosted · Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
R.I.P. 👻 Ghosted · Semi-Supervised Classification with Graph Convolutional Networks
R.I.P. 👻 Ghosted · Proximal Policy Optimization Algorithms