Generative Models from the perspective of Continual Learning

December 21, 2018 · Declared Dead · 🏛 IEEE International Joint Conference on Neural Networks

💀 CAUSE OF DEATH: 404 Not Found
The linked code repository is broken or dead.
Authors: Timothée Lesort, Hugo Caselles-Dupré, Michael Garcia-Ortiz, Andrei Stoian, David Filliat
arXiv ID: 1812.09111
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI, cs.CV
Citations: 167
Venue: IEEE International Joint Conference on Neural Networks
Repository: https://github.com/TLESORT/Generative_Continual_Learning
Last Checked: 1 month ago
Abstract
Which generative model is the most suitable for Continual Learning? This paper aims to evaluate and compare generative models on disjoint sequential image generation tasks. We investigate how several models learn and forget, considering various strategies: rehearsal, regularization, generative replay and fine-tuning. We use two quantitative metrics to estimate generation quality and memory ability. We experiment with sequential tasks on three commonly used benchmarks for Continual Learning (MNIST, Fashion MNIST and CIFAR10). We found that among all models, the original GAN performs best, and among Continual Learning strategies, generative replay outperforms all other methods. Even though we found satisfactory combinations on MNIST and Fashion MNIST, training generative models sequentially on CIFAR10 is particularly unstable and remains a challenge. Our code is available online at https://github.com/TLESORT/Generative_Continual_Learning.
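The released code is gone, but the winning strategy the abstract names, generative replay, is simple enough to sketch: after each task, a frozen snapshot of the generator produces pseudo-samples of past data, which are mixed into the next task's batches so neither the generator nor the classifier forgets. Below is a minimal, hypothetical PyTorch sketch of that loop. Note the paper found the original GAN performed best; a tiny VAE stands in here only to keep the example short and self-contained, and every name (VAE, Solver, train_tasks, replay_ratio) is illustrative, not taken from the authors' repository.

```python
"""Minimal sketch of generative replay for disjoint sequential tasks.
Illustrative only: a toy VAE replaces the paper's GAN to stay compact."""
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Tiny VAE over flat 28x28 images in [0, 1] (MNIST-sized)."""
    def __init__(self, z_dim=32):
        super().__init__()
        self.z_dim = z_dim
        self.enc = nn.Linear(784, 2 * z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 784), nn.Sigmoid())

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar

    def sample(self, n):
        return self.dec(torch.randn(n, self.z_dim))

class Solver(nn.Module):
    """Tiny classifier trained alongside the generator."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                                 nn.Linear(256, n_classes))

    def forward(self, x):
        return self.net(x)

def vae_loss(x, recon, mu, logvar):
    rec = F.binary_cross_entropy(recon, x, reduction="sum") / x.size(0)
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return rec + kld

def train_tasks(tasks, n_classes=10, replay_ratio=0.5):
    """tasks: list of DataLoaders, one per disjoint class subset."""
    gen, solver = VAE(), Solver(n_classes)
    g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    s_opt = torch.optim.Adam(solver.parameters(), lr=1e-3)
    old_gen, old_solver = None, None  # frozen snapshots of past tasks
    for loader in tasks:
        for x, y in loader:
            x = x.view(x.size(0), -1)
            if old_gen is not None:
                # Replay: pseudo-images from the frozen generator,
                # pseudo-labels from the frozen solver.
                n = int(replay_ratio * x.size(0))
                with torch.no_grad():
                    x_re = old_gen.sample(n)
                    y_re = old_solver(x_re).argmax(dim=1)
                x = torch.cat([x, x_re])
                y = torch.cat([y, y_re])
            # Solver step on current data plus replayed pseudo-data.
            s_opt.zero_grad()
            F.cross_entropy(solver(x), y).backward()
            s_opt.step()
            # Generator step on the same mixed batch.
            g_opt.zero_grad()
            recon, mu, logvar = gen(x)
            vae_loss(x, recon, mu, logvar).backward()
            g_opt.step()
        # Snapshot models; they replay everything seen so far.
        old_gen, old_solver = copy.deepcopy(gen), copy.deepcopy(solver)
        for p in list(old_gen.parameters()) + list(old_solver.parameters()):
            p.requires_grad_(False)
    return gen, solver
```

On a disjoint benchmark such as MNIST split into five two-class tasks, tasks would be five DataLoaders; after each task the frozen generator/solver pair supplies pseudo-samples of all earlier classes, which is what keeps earlier digits from being forgotten.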
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt · Machine Learning

Died the same way · 💀 404 Not Found