Contextualizing Enhances Gradient Based Meta Learning

July 17, 2020 · Declared Dead · 🏛 IEEE Conference on High Performance Extreme Computing

💀 CAUSE OF DEATH: 404 Not Found
Code link is broken/dead
Authors: Evan Vogelbaum, Rumen Dangovski, Li Jing, Marin Soljačić
arXiv ID: 2007.10143
Category: cs.LG (Machine Learning)
Cross-listed: cs.CV, cs.NE, stat.ML
Citations: 3
Venue: IEEE Conference on High Performance Extreme Computing
Repository: https://github.com/naveace/proto-context
Last Checked: 2 months ago
Abstract
Meta-learning methods have found success when applied to few-shot classification problems, in which they quickly adapt to a small number of labeled examples. Prototypical representations, each representing a particular class, have been of particular importance in this setting, as they provide a compact form to convey information learned from the labeled examples. However, these prototypes are just one method of representing this information, and they are narrow in their scope and ability to classify unseen examples. We propose the implementation of contextualizers, which are generalizable prototypes that adapt to given examples and play a larger role in classification for gradient-based models. We demonstrate how to equip meta-learning methods with contextualizers and show that their use can significantly boost performance on a range of few-shot learning datasets. We also present figures of merit demonstrating the potential benefits of contextualizers, along with analysis of how models make use of them. Our approach is particularly apt for low-data environments where it is difficult to update parameters without overfitting. Our implementation and instructions to reproduce the experiments are available at https://github.com/naveace/proto-context.
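Since the paper's code link is dead, here is a minimal sketch of the standard prototypical-representation baseline the abstract refers to (class prototypes as mean support-set embeddings, classification by nearest prototype). This illustrates only the conventional starting point, not the authors' contextualizer method; all function names and the toy data are illustrative assumptions.

```python
import numpy as np

def class_prototypes(embeddings, labels, n_classes):
    # Standard prototypical-network prototype: the mean embedding of
    # each class's support examples.
    return np.stack([embeddings[labels == c].mean(axis=0)
                     for c in range(n_classes)])

def nearest_prototype(query, prototypes):
    # Classify a query embedding by its nearest prototype
    # under Euclidean distance.
    dists = np.linalg.norm(prototypes - query, axis=1)
    return int(np.argmin(dists))

# Toy 2-way, 2-shot episode in a 2-D embedding space.
support = np.array([[0.0, 0.0], [0.2, 0.0],   # class 0 support
                    [1.0, 1.0], [1.2, 1.0]])  # class 1 support
labels = np.array([0, 0, 1, 1])
protos = class_prototypes(support, labels, n_classes=2)
print(nearest_prototype(np.array([1.1, 0.9]), protos))  # → 1
```

The abstract's point is that such fixed mean prototypes are narrow; contextualizers would instead adapt these representations to the given examples, with a larger role in gradient-based classification.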
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Machine Learning

Died the same way — 💀 404 Not Found