KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training

October 16, 2023 · Entered Twilight · 🏛 Neural Information Processing Systems

💀 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitignore, GradMatch, ImageNet, README.md, VisualAtom

Authors: Truong Thao Nguyen, Balazs Gerofi, Edgar Josafat Martinez-Noriega, François Trahay, Mohamed Wahib
arXiv ID: 2310.10102
Category: cs.DC (Distributed Computing), cross-listed in cs.CV and cs.LG
Citations: 2
Venue: Neural Information Processing Systems
Repository: https://github.com/TruongThaoNguyen/kakurenbo ⭐ 7
Last Checked: 1 month ago
Abstract
This paper proposes a method for hiding the least-important samples during the training of deep neural networks to increase efficiency, i.e., to reduce the cost of training. Using information about the loss and prediction confidence during training, we adaptively find samples to exclude in a given epoch based on their contribution to the overall learning process, without significantly degrading accuracy. We explore the convergence properties when accounting for the reduction in the number of SGD updates. Empirical results on various large-scale datasets and models used in image classification and segmentation show that, while the with-replacement importance sampling algorithm performs poorly on large datasets, our method can reduce total training time by up to 22% while degrading accuracy by only 0.4% compared to the baseline. Code available at https://github.com/TruongThaoNguyen/kakurenbo
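The core idea from the abstract — ranking samples each epoch by their loss and prediction confidence, then skipping the least informative ones — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the `loss - confidence` score, and the `hide_fraction` parameter are assumptions, and the actual method also accounts for the reduced number of SGD updates, which this sketch omits.

```python
def select_visible_samples(losses, confidences, hide_fraction=0.1):
    """Return indices of samples to train on this epoch.

    Hypothetical sketch of loss/confidence-based sample hiding:
    samples with low loss and high prediction confidence are assumed
    to contribute least to learning and are hidden first.
    """
    n = len(losses)
    n_hide = int(n * hide_fraction)
    # Low loss and high confidence => low score => hidden first.
    scores = [loss - conf for loss, conf in zip(losses, confidences)]
    order = sorted(range(n), key=lambda i: scores[i])  # ascending score
    hidden = set(order[:n_hide])
    return [i for i in range(n) if i not in hidden]


# Example: with 5 samples and hide_fraction=0.4, the two easiest
# (lowest-loss, highest-confidence) samples are excluded this epoch.
visible = select_visible_samples(
    losses=[0.05, 2.0, 0.9, 0.01, 1.5],
    confidences=[0.99, 0.2, 0.5, 0.98, 0.3],
    hide_fraction=0.4,
)
```

In a real training loop, the per-sample losses and confidences would come from the previous epoch's forward passes, and the visible indices would drive the sampler for the next epoch.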