EAdam Optimizer: How $ε$ Impact Adam

November 04, 2020 · Entered Twilight · arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: EAdam.py, README.md, cifar-classification, mmdetection-master, nlp, results

Authors: Wei Yuan, Kai-Xin Gao
arXiv ID: 2011.02150
Category: cs.LG (Machine Learning)
Cross-listed: stat.ML
Citations: 20
Venue: arXiv.org
Repository: https://github.com/yuanwei2019/EAdam-optimizer (⭐ 28)
Last checked: 1 month ago
Abstract
Many adaptive optimization methods have been proposed for deep learning, among which Adam is regarded as the default algorithm and is widely used in many deep learning frameworks. Recently, many variants of Adam, such as AdaBound, RAdam, and AdaBelief, have been proposed and show better performance than Adam. However, these variants mainly change the stepsize by operating on the gradient or its square. Motivated by the fact that suitable damping is important to the success of powerful second-order optimizers, we discuss the impact of the constant $ε$ on Adam in this paper. Surprisingly, we can obtain better performance than Adam simply by changing the position of $ε$. Based on this finding, we propose a new variant of Adam called EAdam, which requires no extra hyper-parameters or computational cost. We also discuss the relationships and differences between our method and Adam. Finally, we conduct extensive experiments on various popular tasks and models. Experimental results show that our method can bring significant improvements over Adam. Our code is available at https://github.com/yuanwei2019/EAdam-optimizer.
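The change the abstract describes is where $ε$ enters the update: Adam adds $ε$ to the denominator once, outside the square root, while EAdam folds it into the second-moment accumulator at every step. Below is a minimal NumPy sketch of that difference, paraphrasing the update rules as we read them from the abstract and the repository's EAdam.py; the function names, signatures, and defaults here are illustrative, not the authors' exact code.

```python
import numpy as np

def adam_step(theta, m, v, g, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Standard Adam: eps is added once to the denominator, OUTSIDE the sqrt.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def eadam_step(theta, m, v, g, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # EAdam as we read it: eps is added to the accumulator v at EVERY step,
    # so it lands inside the sqrt and compounds over time.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2 + eps   # the only change vs. Adam
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return theta - lr * m_hat / np.sqrt(v_hat), m, v
```

Unrolling the EAdam accumulator gives $v_t = \sum_{k=1}^{t} \beta_2^{t-k}\,[(1-\beta_2)\,g_k^2 + ε]$, so after bias correction the accumulated constant contributes $ε/(1-\beta_2)$ inside the square root. With the defaults $\beta_2 = 0.999$ and $ε = 10^{-8}$ that is an effective damping of $10^{-5}$ under the root, far larger than the $10^{-8}$ Adam adds outside it, which matches the abstract's claim that moving $ε$ changes behavior without introducing new hyper-parameters.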
Community shame: not yet rated

📜 Similar Papers

In the same crypt – Machine Learning