Attention-based Audio-Visual Fusion for Robust Automatic Speech Recognition

September 05, 2018 · Entered Twilight · 🏛 International Conference on Multimodal Interaction

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .github, .gitignore, LICENSE, README.md, avsr, datasets, experiment_lrs2_lm.py, extract_faces.py, run_audio.py, run_audiovisual.py, run_video.py, write_records_tcd.py

Authors: George Sterpu, Christian Saam, Naomi Harte
arXiv ID: 1809.01728
Category: eess.AS (Audio & Speech)
Cross-listed: cs.LG, cs.SD, eess.IV, stat.ML
Citations: 71
Venue: International Conference on Multimodal Interaction
Repository: https://github.com/georgesterpu/Sigmedia-AVSR ⭐ 83
Last Checked: 1 month ago
Abstract
Automatic speech recognition can potentially benefit from the lip motion patterns, complementing acoustic speech to improve the overall recognition performance, particularly in noise. In this paper we propose an audio-visual fusion strategy that goes beyond simple feature concatenation and learns to automatically align the two modalities, leading to enhanced representations which increase the recognition accuracy in both clean and noisy conditions. We test our strategy on the TCD-TIMIT and LRS2 datasets, designed for large vocabulary continuous speech recognition, applying three types of noise at different power ratios. We also exploit state-of-the-art Sequence-to-Sequence architectures, showing that our method can be easily integrated. Results show relative improvements from 7% up to 30% on TCD-TIMIT over the acoustic modality alone, depending on the acoustic noise level. We anticipate that the fusion strategy can easily generalise to many other multimodal tasks which involve correlated modalities. Code available online on GitHub: https://github.com/georgesterpu/Sigmedia-AVSR
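The core idea in the abstract — attending from the audio stream over the visual stream to align the two modalities, rather than concatenating raw features — can be sketched as scaled dot-product attention. This is an illustrative numpy toy, not the paper's actual model (which uses trained Sequence-to-Sequence encoders with learned attention parameters); all names and shapes here are hypothetical.

```python
import numpy as np

def cross_modal_attention(audio, video):
    """Attend from each audio frame over all video frames, then fuse.

    audio: (Ta, d) audio feature sequence
    video: (Tv, d) video feature sequence
    Returns fused features (Ta, 2*d) — each audio frame concatenated
    with its attended video context — plus the (Ta, Tv) attention map.
    """
    d = audio.shape[1]
    scores = audio @ video.T / np.sqrt(d)          # (Ta, Tv) alignment scores
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over video frames
    context = weights @ video                      # (Ta, d) attended video context
    return np.concatenate([audio, context], axis=1), weights

# Toy example: 4 audio frames, 6 video frames, 8-dim features.
rng = np.random.default_rng(0)
audio = rng.standard_normal((4, 8))
video = rng.standard_normal((6, 8))
fused, w = cross_modal_attention(audio, video)
print(fused.shape)  # (4, 16)
```

Because the attention weights form a soft alignment, the video context tracks the audio timeline even when the two streams have different frame rates — the property that simple concatenation lacks.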

📜 Similar Papers

In the same crypt — Audio & Speech