Are we asking the right questions in MovieQA?

November 08, 2019 · Entered Twilight · 🏛️ 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"No code URL or promise found in abstract"
"Derived repo from GitHub Pages (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: README.md, data, main.py, model, w2v_models

Authors: Bhavan Jasani, Rohit Girdhar, Deva Ramanan
arXiv ID: 1911.03083
Category: cs.CV (Computer Vision)
Cross-listed: cs.CL
Citations: 17
Venue: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
Repository: https://github.com/bhavanj/MovieQAWithoutMovies (⭐ 4)
Last checked: 7 days ago
Abstract
Joint vision and language tasks like visual question answering are fascinating because they explore high-level understanding, but at the same time, can be more prone to language biases. In this paper, we explore the biases in the MovieQA dataset and propose a strikingly simple model which can exploit them. We find that using the right word embedding is of utmost importance. By using an appropriately trained word embedding, about half the Question-Answers (QAs) can be answered by looking at the questions and answers alone, completely ignoring narrative context from video clips, subtitles, and movie scripts. Compared to the best published papers on the leaderboard, our simple question + answer only model improves accuracy by 5% for the video + subtitle category, 5% for subtitles, 15% for DVS, and 6% for scripts.
Community shame: not yet rated

📜 Similar Papers

In the same crypt – Computer Vision