Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines

October 30, 2018 · Entered Twilight · arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, README.md, agents, dataloaders, fig, iBatchLearn.py, models, modules, requirements.txt, scripts, utils

Authors: Yen-Chang Hsu, Yen-Cheng Liu, Anita Ramasamy, Zsolt Kira
arXiv ID: 1810.12488
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI, cs.CV
Citations: 387
Venue: arXiv.org
Repository: https://github.com/GT-RIPL/Continual-Learning-Benchmark (⭐ 524)
Last checked: 1 month ago
Abstract
Continual learning has received a great deal of attention recently, with several approaches being proposed. However, evaluations involve a diverse set of scenarios, making meaningful comparison difficult. This work provides a systematic categorization of the scenarios and evaluates them within a consistent framework that includes strong baselines and state-of-the-art methods. The results provide an understanding of the relative difficulty of the scenarios and show that simple baselines (Adagrad, L2 regularization, and naive rehearsal strategies) can, surprisingly, achieve performance similar to current mainstream methods. We conclude with several suggestions for creating harder evaluation scenarios and future research directions. The code is available at https://github.com/GT-RIPL/Continual-Learning-Benchmark
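The abstract's claim about strong baselines is concrete enough to sketch. Below is a minimal, hypothetical illustration of a naive rehearsal baseline: keep a bounded buffer of past examples and mix replayed samples into every new batch. The class name, buffer size, reservoir-style insertion, and the use of Adagrad (itself one of the paper's named baselines) are illustrative assumptions, not the repository's actual implementation.

```python
import random
import torch
import torch.nn as nn
import torch.optim as optim

class NaiveRehearsal:
    """Hypothetical sketch of a naive rehearsal baseline, not the repo's code."""

    def __init__(self, model, buffer_size=1000, lr=1e-2):
        self.model = model
        self.buffer = []                # (input, label) pairs from past data
        self.buffer_size = buffer_size  # assumed replay budget
        self.optimizer = optim.Adagrad(model.parameters(), lr=lr)
        self.criterion = nn.CrossEntropyLoss()

    def _remember(self, x, y):
        # Reservoir-style insertion keeps the buffer bounded.
        for xi, yi in zip(x, y):
            if len(self.buffer) < self.buffer_size:
                self.buffer.append((xi, yi))
            else:
                self.buffer[random.randrange(self.buffer_size)] = (xi, yi)

    def train_batch(self, x, y):
        batch_x, batch_y = x, y
        if self.buffer:
            # Mix an equally sized batch of replayed samples into the update.
            replay = random.sample(self.buffer, min(len(self.buffer), len(x)))
            batch_x = torch.cat([x, torch.stack([r[0] for r in replay])])
            batch_y = torch.cat([y, torch.stack([r[1] for r in replay])])
        self.optimizer.zero_grad()
        loss = self.criterion(self.model(batch_x), batch_y)
        loss.backward()
        self.optimizer.step()
        self._remember(x, y)  # only current-batch samples enter the buffer
        return loss.item()
```

The L2-regularization baseline mentioned alongside it can be read as a quadratic penalty pulling each weight toward its value after the previous task (essentially EWC with an identity importance matrix). A hedged sketch, with `anchor_params` and `strength` as assumed names:

```python
def l2_penalty(model, anchor_params, strength=1.0):
    # Quadratic drift penalty toward the previous task's weights.
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (p - anchor_params[name]).pow(2).sum()
    return strength * penalty

# After each task, snapshot the weights to anchor the next one:
# anchor_params = {n: p.detach().clone() for n, p in model.named_parameters()}
# loss = criterion(model(x), y) + l2_penalty(model, anchor_params)
```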

📜 Similar Papers

In the same crypt: Machine Learning