AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies

October 21, 2020 · Entered Twilight · 🏛 International Conference on Pattern Recognition

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: LICENSE, README.md, sample_inputs, test, trained_models

Authors: Ha Thi Phuong Thao, Balamurali B. T., Dorien Herremans, Gemma Roig
arXiv ID: 2010.11188
Category: cs.SD (Sound)
Cross-listed: cs.CV, eess.AS
Citations: 8
Venue: International Conference on Pattern Recognition
Repository: https://github.com/ivyha010/AttendAffectNet ⭐ 16
Last Checked: 1 month ago
Abstract
In this work, we propose different variants of a self-attention based network for emotion prediction from movies, which we call AttendAffectNet. We take both audio and video into account and incorporate the relations among multiple modalities by applying the self-attention mechanism in a novel manner to the extracted features for emotion prediction. We compare this to the typical temporal integration of self-attention based models, which in our case captures the relations among temporal representations of the movie while accounting for the sequential dependencies of emotion responses. We demonstrate the effectiveness of our proposed architectures on the extended COGNIMUSE dataset [1], [2] and the MediaEval 2016 Emotional Impact of Movies Task [3], both of which consist of movies with emotion annotations. Our results show that applying the self-attention mechanism to the different audio-visual features, rather than in the time domain, is more effective for emotion prediction. Our approach also outperforms many state-of-the-art models for emotion prediction. The code to reproduce our results, with the models' implementation, is available at: https://github.com/ivyha010/AttendAffectNet.
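
To illustrate the feature-level variant described in the abstract (self-attention applied across extracted modality features rather than over time), here is a minimal PyTorch sketch. This is not the authors' implementation; see the linked repository for that. The class name, feature dimensions, and the mean-pooled regression head predicting valence and arousal are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FeatureSelfAttention(nn.Module):
    """Sketch: self-attention across modality feature vectors
    (one token per extracted audio/visual feature), not across time."""
    def __init__(self, feature_dims, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        # Project each modality feature to a common embedding dimension
        self.projections = nn.ModuleList(
            nn.Linear(dim, d_model) for dim in feature_dims
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Regression head for (valence, arousal); 2 outputs is an assumption
        self.head = nn.Linear(d_model, 2)

    def forward(self, features):
        # features: list of tensors, each (batch, dim_i), one per modality feature
        tokens = torch.stack(
            [proj(f) for proj, f in zip(self.projections, features)], dim=1
        )                                # (batch, num_features, d_model)
        encoded = self.encoder(tokens)   # self-attention over feature tokens
        pooled = encoded.mean(dim=1)     # average across feature tokens
        return self.head(pooled)         # (batch, 2)

# Usage with hypothetical feature dimensions (e.g. video, audio, motion features)
model = FeatureSelfAttention(feature_dims=[2048, 512, 128])
feats = [torch.randn(4, 2048), torch.randn(4, 512), torch.randn(4, 128)]
out = model(feats)  # shape: (4, 2)
```

The key design point the abstract argues for is visible here: attention relates the modality features to one another within a single time window, instead of relating time steps to one another.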
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt: Sound