Generative Pre-Training for Speech with Autoregressive Predictive Coding

October 23, 2019 · Entered Twilight · 🏛 IEEE International Conference on Acoustics, Speech, and Signal Processing

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 6.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: README.md, apc_model.py, datasets.py, load_pretrained_model.py, prepare_data.py, train_apc.py, utils.py

Authors: Yu-An Chung, James Glass
arXiv ID: 1910.12607
Category: eess.AS (Audio & Speech)
Cross-listed: cs.CL, cs.LG, cs.SD
Citations: 182
Venue: IEEE International Conference on Acoustics, Speech, and Signal Processing
Repository: https://github.com/iamyuanchung/Autoregressive-Predictive-Coding (⭐ 189)
Last Checked: 1 month ago
Abstract
Learning meaningful and general representations from unannotated speech that are applicable to a wide range of tasks remains challenging. In this paper we propose to use autoregressive predictive coding (APC), a recently proposed self-supervised objective, as a generative pre-training approach for learning meaningful, non-specific, and transferable speech representations. We pre-train APC on large-scale unlabeled data and conduct transfer learning experiments on three speech applications that require different information about speech characteristics to perform well: speech recognition, speech translation, and speaker identification. Extensive experiments show that APC not only outperforms surface features (e.g., log Mel spectrograms) and other popular representation learning methods on all three tasks, but is also effective at reducing downstream labeled data size and model parameters. We also investigate the use of Transformers for modeling APC and find it superior to RNNs.
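
The abstract states the recipe at a high level, so a concrete sketch may help: APC trains an autoregressive model on unlabeled speech to predict a log Mel frame n steps in the future, minimizing an L1 reconstruction loss. The PyTorch sketch below is a minimal illustration of that objective, not the authors' released code; the GRU architecture, the hyperparameters, and the shift n=3 are all assumptions made for the example.

```python
# Illustrative sketch of the APC objective, not the authors' released code.
# An autoregressive RNN reads log Mel frames x_1..x_t and predicts the
# frame n steps ahead, x_{t+n}; training minimizes the L1 distance.
import torch
import torch.nn as nn

class APC(nn.Module):
    def __init__(self, n_mels=80, hidden=512, layers=3):
        super().__init__()
        # Unidirectional GRU, so each hidden state only sees past frames.
        self.rnn = nn.GRU(n_mels, hidden, num_layers=layers, batch_first=True)
        self.proj = nn.Linear(hidden, n_mels)  # hidden state -> predicted frame

    def forward(self, x):                # x: (batch, time, n_mels)
        h, _ = self.rnn(x)               # h[:, t] summarizes x[:, :t+1]
        return self.proj(h), h           # frame predictions, representations

def apc_loss(model, mels, n=3):
    """L1 loss between the prediction at step t and the true frame at t+n."""
    pred, _ = model(mels[:, :-n])        # predict from frames 1..T-n
    return nn.functional.l1_loss(pred, mels[:, n:])

# Toy run: a batch of 4 utterances, 200 frames, 80 Mel bins.
model = APC()
loss = apc_loss(model, torch.randn(4, 200, 80))
loss.backward()
```

After pre-training, the prediction head is discarded and the hidden states h serve as the transferable speech representations that the abstract evaluates on recognition, translation, and speaker identification.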

📜 Similar Papers

In the same crypt – Audio & Speech