Training neural audio classifiers with few data

October 24, 2018 · Entered Twilight · 🏛 IEEE International Conference on Acoustics, Speech, and Signal Processing

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 7.0 years ago (β‰₯5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: README.md, aux, data, requirements.txt, src

Authors: Jordi Pons, Joan Serrà, Xavier Serra
arXiv ID: 1810.10274
Category: cs.SD (Sound)
Cross-listed: cs.AI, cs.LG, eess.AS
Citations: 66
Venue: IEEE International Conference on Acoustics, Speech, and Signal Processing
Repository: https://github.com/jordipons/neural-classifiers-with-few-audio/ (⭐ 60)
Last checked: 1 month ago
Abstract
We investigate supervised learning strategies that improve the training of neural network audio classifiers on small annotated collections. In particular, we study whether (i) a naive regularization of the solution space, (ii) prototypical networks, (iii) transfer learning, or (iv) their combination can help deep learning models better leverage a small number of training examples. To this end, we evaluate (i-iv) for the tasks of acoustic event recognition and acoustic scene classification, considering from 1 to 100 labeled examples per class. Results indicate that transfer learning is a powerful strategy in such scenarios, but prototypical networks show promising results when one does not have access to external or validation data.
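The prototypical-network strategy the abstract highlights classifies a query by its distance to per-class "prototypes", each computed as the mean embedding of that class's few labeled examples. A minimal NumPy sketch of that inference step, assuming embeddings have already been produced by some audio encoder (function names and the toy data are illustrative, not the authors' code):

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    # Each class prototype is the mean embedding of its support examples.
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # Assign each query to the nearest prototype (squared Euclidean distance).
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy example: 2 classes, 4-D embeddings, 3 support examples per class.
rng = np.random.default_rng(0)
sup = np.concatenate([rng.normal(0.0, 0.1, (3, 4)),   # class 0 near the origin
                      rng.normal(1.0, 0.1, (3, 4))])  # class 1 near (1,1,1,1)
lab = np.array([0, 0, 0, 1, 1, 1])
protos = prototypes(sup, lab, 2)

queries = np.array([[0.0, 0.0, 0.0, 0.0],
                    [1.0, 1.0, 1.0, 1.0]])
print(classify(queries, protos))  # -> [0 1]
```

Because the prototypes are just means over the support set, this scales naturally from 1 to 100 examples per class, which matches the few-shot regime the paper evaluates.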
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt – Sound