Benchmarking Multimodal Sentiment Analysis

July 29, 2017 · Declared Dead · 🏛 Conference on Intelligent Text Processing and Computational Linguistics

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Erik Cambria, Devamanyu Hazarika, Soujanya Poria, Amir Hussain, R. B. V. Subramanyam
arXiv ID: 1707.09538
Category: cs.MM (Multimedia)
Cross-listed: cs.CL
Citations: 77
Venue: Conference on Intelligent Text Processing and Computational Linguistics
Last Checked: 1 month ago
Abstract
We propose a framework for multimodal sentiment analysis and emotion recognition using convolutional neural network-based feature extraction from text and visual modalities. We obtain a performance improvement of 10% over the state of the art by combining visual, text and audio features. We also discuss some major issues frequently ignored in multimodal sentiment analysis research: the role of speaker-independent models, the importance of the modalities, and generalizability. The paper thus serves as a new benchmark for further research in multimodal sentiment analysis and also demonstrates the different facets of analysis to be considered while performing such tasks.
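Since no code was released, the fusion step the abstract describes can only be guessed at. Below is a minimal illustrative sketch of feature-level (early) fusion, the common baseline for combining text, visual, and audio features: per-modality feature vectors are concatenated per utterance before classification. All dimensions, the random features, and the fusion scheme are assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical per-modality features for 8 utterances.
# Sizes are illustrative assumptions, not taken from the paper.
rng = np.random.default_rng(0)
n_utterances = 8
text_feat = rng.standard_normal((n_utterances, 300))    # e.g. CNN-pooled text features
visual_feat = rng.standard_normal((n_utterances, 128))  # e.g. CNN visual features
audio_feat = rng.standard_normal((n_utterances, 64))    # e.g. frame-level audio statistics

# Feature-level fusion: concatenate the modality vectors per utterance.
# A classifier (SVM, MLP, ...) would then be trained on the fused vectors.
fused = np.concatenate([text_feat, visual_feat, audio_feat], axis=1)
print(fused.shape)  # (8, 492)
```

A speaker-independent evaluation, one of the issues the abstract raises, would additionally require splitting train and test sets so that no speaker appears in both.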
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Multimedia

Died the same way — 👻 Ghosted