LSTMSE-Net: Long Short Term Speech Enhancement Network for Audio-visual Speech Enhancement

September 03, 2024 · Declared Dead · 🏛 3rd COG-MHEAR Workshop on Audio-Visual Speech Enhancement (AVSEC)

💀 CAUSE OF DEATH: 404 Not Found
Code link is broken/dead
Authors: Arnav Jain, Jasmer Singh Sanjotra, Harshvardhan Choudhary, Krish Agrawal, Rupal Shah, Rohan Jha, M. Sajid, Amir Hussain, M. Tanveer
arXiv ID: 2409.02266
Category: cs.SD (Sound); cross-listed: cs.LG, cs.MM, eess.AS
Citations: 10
Venue: 3rd COG-MHEAR Workshop on Audio-Visual Speech Enhancement (AVSEC)
Repository: https://github.com/mtanveer1/AVSEC-3-Challenge
Last Checked: 1 month ago
Abstract
In this paper, we propose long short term memory speech enhancement network (LSTMSE-Net), an audio-visual speech enhancement (AVSE) method. This innovative method leverages the complementary nature of visual and audio information to boost the quality of speech signals. Visual features are extracted with VisualFeatNet (VFN), and audio features are processed through an encoder and decoder. The system scales and concatenates visual and audio features, then processes them through a separator network for optimized speech enhancement. The architecture highlights advancements in leveraging multi-modal data and interpolation techniques for robust AVSE challenge systems. The performance of LSTMSE-Net surpasses that of the baseline model from the COG-MHEAR AVSE Challenge 2024 by a margin of 0.06 in scale-invariant signal-to-distortion ratio (SISDR), 0.03 in short-time objective intelligibility (STOI), and 1.32 in perceptual evaluation of speech quality (PESQ). The source code of the proposed LSTMSE-Net is available at https://github.com/mtanveer1/AVSEC-3-Challenge.
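The fusion step the abstract describes (interpolating visual features along time to match the audio frame rate, then concatenating the two streams before the separator network) can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the frame counts, feature dimensions, and function names are all hypothetical.

```python
def interpolate_frames(frames, target_len):
    """Linearly interpolate a sequence of feature vectors to target_len frames.

    Video runs at a lower frame rate than the audio feature sequence, so the
    visual stream must be upsampled along time before fusion (illustrative only).
    """
    n = len(frames)
    if n == 1:
        return [list(frames[0]) for _ in range(target_len)]
    out = []
    for i in range(target_len):
        pos = i * (n - 1) / (target_len - 1)   # fractional source index
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        w = pos - lo                            # blend weight between neighbours
        out.append([(1 - w) * a + w * b for a, b in zip(frames[lo], frames[hi])])
    return out


def fuse(audio_feats, visual_feats):
    """Time-align visual features to the audio stream and concatenate per frame."""
    aligned = interpolate_frames(visual_feats, len(audio_feats))
    # list concatenation here stands in for channel-wise concatenation
    return [a + v for a, v in zip(audio_feats, aligned)]


# Hypothetical shapes: 100 audio frames of dim 4, 25 video frames of dim 2.
audio = [[0.0] * 4 for _ in range(100)]
video = [[1.0] * 2 for _ in range(25)]
fused = fuse(audio, video)   # 100 frames, each of dim 4 + 2 = 6
```

The fused sequence keeps the audio time axis (100 frames) and widens each frame's feature dimension, which is what feeds the separator network in the described architecture.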
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Sound

Died the same way — 💀 404 Not Found