SGDR: Stochastic Gradient Descent with Warm Restarts

August 13, 2016 · Entered Twilight · 🏛 International Conference on Learning Representations

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 9.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: README.md, SGDR_WRNs.py

Authors: Ilya Loshchilov, Frank Hutter
arXiv ID: 1608.03983
Category: cs.LG (Machine Learning)
Cross-listed: cs.NE, math.OC
Citations: 9.8K
Venue: International Conference on Learning Representations
Repository: https://github.com/loshchil/SGDR ⭐ 255
Last checked: 1 month ago
Abstract
Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR
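The schedule behind those results is compact enough to sketch. Below is a minimal Python rendition of the paper's cosine annealing with warm restarts: the learning rate decays from eta_max to eta_min over a run of T_i epochs, then jumps back to eta_max, with each new run T_mult times longer than the last. The function name and the default hyperparameter values are illustrative assumptions, not code taken from the repository's SGDR_WRNs.py.

```python
import math

def sgdr_lr(epoch, eta_min=0.0, eta_max=0.05, T_0=10, T_mult=2):
    """Cosine-annealed learning rate with warm restarts (SGDR).

    eta_min/eta_max bound the learning rate; T_0 is the length of the
    first run in epochs; each warm restart multiplies the run length
    by T_mult. Defaults are illustrative, not the paper's settings.
    """
    T_i, t = T_0, epoch
    # Walk forward through completed runs until `epoch` falls inside
    # the current one; t becomes the position within that run.
    while t >= T_i:
        t -= T_i
        T_i *= T_mult
    # Cosine annealing within the run: the rate starts at eta_max
    # (t = 0) and decays to eta_min just before the next restart.
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t / T_i))
```

With these illustrative defaults (T_0 = 10, T_mult = 2), the rate resets to eta_max at epochs 10 and 30, and each subsequent run doubles in length.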
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Machine Learning