Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion

November 18, 2019 · Declared Dead · 🏛 AAAI Conference on Artificial Intelligence

💀 CAUSE OF DEATH: 404 Not Found
The code link is dead: the repository URL returns a 404.
Authors: Sijie Mai, Haifeng Hu, Songlong Xing
arXiv ID: 1911.07848
Category: cs.CV (Computer Vision)
Cross-listed: cs.LG, cs.MM
Citations: 226
Venue: AAAI Conference on Artificial Intelligence
Repository: https://github.com/TmacMai/ARGF_multimodal_fusion
Last Checked: 1 month ago
Abstract
Learning a joint embedding space for various modalities is of vital importance for multimodal fusion. Mainstream modality fusion approaches fail to achieve this goal, leaving a modality gap that heavily affects cross-modal fusion. In this paper, we propose a novel adversarial encoder-decoder-classifier framework to learn a modality-invariant embedding space. Since the distributions of the various modalities differ in nature, we reduce the modality gap by translating the distributions of the source modalities into that of the target modality via their respective encoders, trained adversarially. Furthermore, we impose additional constraints on the embedding space by introducing a reconstruction loss and a classification loss. We then fuse the encoded representations using a hierarchical graph neural network that explicitly explores unimodal, bimodal, and trimodal interactions in multiple stages. Our method achieves state-of-the-art performance on multiple datasets. Visualization of the learned embeddings suggests that the joint embedding space learned by our method is discriminative. Code is available at: https://github.com/TmacMai/ARGF_multimodal_fusion
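Since the repository is unreachable, a minimal sketch may help readers who want the mechanics without the code. The PyTorch block below reflects only what the abstract states: per-modality encoders, an adversarial term that translates source-modality embeddings toward the target modality's distribution, plus reconstruction and classification constraints. Every dimension, module shape, and weighting term (`lambda_rec`, `lambda_cls`) is an illustrative assumption, not the authors' setting.

```python
# Minimal sketch of the adversarial encoder-decoder-classifier losses from
# the abstract. All sizes, architectures, and loss weights are assumptions.
import torch
import torch.nn as nn

D_SRC, D_TGT, D_EMB, N_CLASSES = 74, 300, 128, 2   # hypothetical feature sizes

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, d_out))

enc_src = mlp(D_SRC, D_EMB)      # encoder for a source modality (e.g. audio)
enc_tgt = mlp(D_TGT, D_EMB)      # encoder for the target modality (e.g. text)
dec_src = mlp(D_EMB, D_SRC)      # decoder enforcing the reconstruction loss
clf     = mlp(D_EMB, N_CLASSES)  # classifier enforcing the classification loss
disc    = mlp(D_EMB, 1)          # discriminator: real target vs. translated source

bce, mse, ce = nn.BCEWithLogitsLoss(), nn.MSELoss(), nn.CrossEntropyLoss()

def encoder_loss(x_src, y, lambda_rec=1.0, lambda_cls=1.0):
    """Pull the source embedding distribution toward the target's (adversarial
    term) while keeping embeddings decodable and discriminative."""
    z_src = enc_src(x_src)
    adv = bce(disc(z_src), torch.ones(x_src.size(0), 1))  # fool the discriminator
    rec = mse(dec_src(z_src), x_src)                      # reconstruction loss
    cls = ce(clf(z_src), y)                               # classification loss
    return adv + lambda_rec * rec + lambda_cls * cls

def discriminator_loss(x_src, x_tgt):
    """Trained in alternation: real target embeddings -> 1, translated -> 0."""
    z_src, z_tgt = enc_src(x_src).detach(), enc_tgt(x_tgt).detach()
    return (bce(disc(z_tgt), torch.ones(x_tgt.size(0), 1)) +
            bce(disc(z_src), torch.zeros(x_src.size(0), 1)))
```

Training would alternate optimizer steps between `encoder_loss` (updating encoders, decoder, and classifier) and `discriminator_loss` (updating `disc`), as in standard GAN training. The fusion stage is only gestured at below: elementwise products are a crude stand-in for the paper's hierarchical graph vertices, showing the staged unimodal, bimodal, and trimodal structure rather than the actual ARGF graph operations.

```python
def hierarchical_fusion(z_a, z_t, z_v):
    """Stage-wise interaction vertices: unimodal -> bimodal -> trimodal.
    Elementwise products are placeholders, not the paper's graph module."""
    uni = [z_a, z_t, z_v]
    bi  = [z_a * z_t, z_a * z_v, z_t * z_v]   # bimodal interaction vertices
    tri = [z_a * z_t * z_v]                   # trimodal interaction vertex
    return torch.cat(uni + bi + tri, dim=-1)  # read-out over all vertices
```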

📜 Similar Papers

In the same crypt · Computer Vision

Died the same way · 💀 404 Not Found