Generalized Negative Correlation Learning for Deep Ensembling

November 05, 2020 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (β‰₯5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, .gitmodules, CIFAR100, COWC, FASHION, IMAGENET, IMAGENETTE, LICENSE, Notes.md, PNEUMONIA, Readme.md, __init__.py, environment.yml, explore_results.ipynb, submodules

Authors: Sebastian Buschjäger, Lukas Pfahler, Katharina Morik
arXiv ID: 2011.02952
Category: cs.LG (Machine Learning)
Cross-listed: stat.ML
Citations: 20
Venue: arXiv.org
Repository: https://github.com/sbuschjaeger/gncl ⭐ 4
Last Checked: 1 month ago
Abstract
Ensemble algorithms offer state-of-the-art performance in many machine learning applications. A common explanation for their excellent performance is the bias-variance decomposition of the mean squared error, which shows that an algorithm's error can be decomposed into its bias and its variance. The two quantities often oppose each other, and ensembles offer an effective way to manage them: a diverse set of base learners reduces the variance while keeping the bias low. Even though there have been numerous works on decomposing other loss functions, the exact mathematical connection is rarely exploited explicitly for ensembling, but merely used as a guiding principle. In this paper, we formulate a generalized bias-variance decomposition for arbitrary twice-differentiable loss functions and study it in the context of Deep Learning. We use this decomposition to derive a Generalized Negative Correlation Learning (GNCL) algorithm which offers explicit control over the ensemble's diversity and smoothly interpolates between the two extremes of independent training and joint training of the ensemble. We show how GNCL encapsulates many previous works, discuss under which circumstances training an ensemble of Neural Networks might fail, and examine which ensembling method should be favored depending on the choice of the individual networks. We make our code publicly available at https://github.com/sbuschjaeger/gncl.
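The abstract's "smooth interpolation between independent and joint training" can be illustrated for the squared-error special case with the classical negative-correlation penalty (the paper's generalization to arbitrary twice-differentiable losses is not reproduced here). The sketch below is illustrative and assumes a single scalar target; the function name and NumPy setup are ours, not the repository's API. It relies on the ambiguity decomposition: the loss of the averaged ensemble equals the mean member loss minus the mean squared deviation from the ensemble mean, so a diversity weight `lmbda` of 0 recovers independent training and 1 recovers joint training.

```python
import numpy as np

def ncl_loss(preds, y, lmbda):
    """Negative-correlation ensemble loss for squared error (hypothetical sketch).

    preds : (M,) array of member predictions for one scalar target y.
    lmbda : diversity weight in [0, 1].
            lmbda = 0 -> average of independent member losses,
            lmbda = 1 -> squared error of the averaged ensemble (joint training),
            by the Krogh-Vedelsby ambiguity decomposition.
    """
    fbar = preds.mean()                        # ensemble prediction
    member_err = np.mean((preds - y) ** 2)     # mean individual squared error
    ambiguity = np.mean((preds - fbar) ** 2)   # diversity around the mean
    return member_err - lmbda * ambiguity

# Intermediate lmbda values trade off individual accuracy against diversity:
preds = np.array([1.0, 2.0, 4.0])
independent = ncl_loss(preds, y=3.0, lmbda=0.0)  # mean member loss
joint = ncl_loss(preds, y=3.0, lmbda=1.0)        # loss of the averaged ensemble
```

For lmbda strictly between 0 and 1 the members are still coupled through `fbar`, which is the mechanism that gives explicit control over ensemble diversity during training.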