Semi-supervised Learning for Multi-speaker Text-to-speech Synthesis Using Discrete Speech Representation

May 16, 2020 · Entered Twilight · 🏛 Interspeech

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, README.md, bin, config, corpus, data, illustration.png, lib, main.py, src, util

Authors: Tao Tu, Yuan-Jui Chen, Alexander H. Liu, Hung-yi Lee
arXiv ID: 2005.08024
Category: eess.AS (Audio & Speech)
Cross-listed: cs.CL, cs.SD
Citations: 8
Venue: Interspeech
Repository: https://github.com/ttaoREtw/semi-tts (⭐ 39)
Last Checked: 1 month ago
Abstract
Recently, end-to-end multi-speaker text-to-speech (TTS) systems have achieved success in settings where large amounts of high-quality speech and corresponding transcriptions are available. However, the laborious process of collecting paired data prevents many institutes from building high-performance multi-speaker TTS systems. In this work, we propose a semi-supervised learning approach for multi-speaker TTS: the model learns from untranscribed audio via the proposed encoder-decoder framework with a discrete speech representation. Experimental results demonstrate that with only an hour of paired speech data, whether that data comes from multiple speakers or a single speaker, the proposed model can generate intelligible speech in different voices. We find that the model benefits from the proposed semi-supervised learning approach even when part of the unpaired speech data is noisy. In addition, our analysis reveals that the speaker characteristics of the paired data affect the effectiveness of semi-supervised TTS.
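To make the abstract's training scheme concrete, below is a minimal PyTorch sketch of the idea as described there: a speech encoder maps audio frames to posteriors over discrete speech units, a shared decoder reconstructs mel frames from those units plus a speaker embedding (so untranscribed audio contributes an autoencoding loss), and paired data ties a text encoder to the same unit space. All module names, toy dimensions, alignment assumptions, and loss weights here are illustrative guesses, not the authors' actual implementation in semi-tts.

```python
# A minimal sketch (assumed architecture and names, not the authors' code)
# of semi-supervised TTS with a discrete speech representation.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, UNITS, MEL, HID = 40, 64, 80, 128   # toy sizes, not the paper's

class TextEncoder(nn.Module):
    """Text tokens -> per-step logits over the discrete speech units."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HID)
        self.rnn = nn.GRU(HID, HID, batch_first=True)
        self.head = nn.Linear(HID, UNITS)
    def forward(self, text):
        h, _ = self.rnn(self.emb(text))
        return self.head(h)

class SpeechEncoder(nn.Module):
    """Mel frames -> logits over the same discrete unit inventory."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(MEL, HID), nn.ReLU(),
                                 nn.Linear(HID, UNITS))
    def forward(self, mel):
        return self.net(mel)

class Decoder(nn.Module):
    """(Soft) unit posteriors + speaker id -> reconstructed mel frames."""
    def __init__(self, n_speakers=4):
        super().__init__()
        self.unit_emb = nn.Linear(UNITS, HID)      # soft codebook lookup
        self.spk_emb = nn.Embedding(n_speakers, HID)
        self.out = nn.Linear(HID, MEL)
    def forward(self, unit_probs, spk):
        h = self.unit_emb(unit_probs) + self.spk_emb(spk).unsqueeze(1)
        return self.out(torch.tanh(h))

text_enc, speech_enc, dec = TextEncoder(), SpeechEncoder(), Decoder()

def semi_supervised_loss(mel_u, spk_u, text, mel_p, spk_p):
    # Unpaired branch: speech -> units -> speech, an autoencoding loss
    # that needs no transcription.
    probs_u = F.softmax(speech_enc(mel_u), dim=-1)
    loss_u = F.l1_loss(dec(probs_u, spk_u), mel_u)

    # Paired branch: train the text encoder to predict the units the
    # speech encoder assigns to the paired audio (text and mel are
    # assumed pre-aligned to the same length, purely for simplicity).
    with torch.no_grad():
        targets = speech_enc(mel_p).argmax(dim=-1)
    logits_t = text_enc(text)
    loss_p = F.cross_entropy(logits_t.transpose(1, 2), targets)

    # ...plus reconstruction of the paired mel through the same decoder.
    loss_rec = F.l1_loss(dec(F.softmax(logits_t, dim=-1), spk_p), mel_p)
    return loss_u + loss_p + loss_rec

# Toy shapes: 50 unpaired frames; a 7-token text paired with 7 mel frames.
loss = semi_supervised_loss(
    torch.randn(2, 50, MEL), torch.tensor([0, 1]),
    torch.randint(0, VOCAB, (2, 7)), torch.randn(2, 7, MEL),
    torch.tensor([2, 3]))
```

One design detail worth noting in this sketch: the decoder consumes soft unit posteriors rather than hard argmax codes, which keeps the unpaired reconstruction loss differentiable end to end. The paper's actual quantization and text-to-unit alignment machinery is more involved than this toy version.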
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Audio & Speech