FiLM: Visual Reasoning with a General Conditioning Layer

September 22, 2017 · Entered Twilight · 🏛 AAAI Conference on Artificial Intelligence

🌅 TWILIGHT: Old Age
Predates the code-sharing era - a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, CLEVR_eval_with_q_type.py, LICENSE, README.md, img, requirements.txt, scripts, vr

Authors: Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, Aaron Courville
arXiv ID: 1709.07871
Category: cs.CV (Computer Vision)
Cross-listed: cs.AI, cs.CL, stat.ML
Citations: 3.0K
Venue: AAAI Conference on Artificial Intelligence
Repository: https://github.com/ethanjperez/film ⭐ 432
Last Checked: 1 month ago
Abstract
We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning - answering image-related questions which require a multi-step, high-level process - a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot.
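The feature-wise affine transformation the abstract describes can be sketched in a few lines (a minimal NumPy illustration, not the authors' implementation; in the paper, gamma and beta would be produced by a conditioning network, e.g. a question encoder, while here they are hard-coded toy values):

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise Linear Modulation: scale and shift each feature
    channel by conditioning-derived parameters.

    features: (batch, channels, height, width) activations
    gamma, beta: (batch, channels) per-channel scale and shift
    """
    # Broadcast the per-channel parameters over the spatial dimensions.
    return gamma[:, :, None, None] * features + beta[:, :, None, None]

# Toy example with hand-picked parameters (illustrative only):
x = np.ones((1, 2, 2, 2))       # one input, 2 channels, 2x2 spatial
gamma = np.array([[2.0, 0.5]])  # per-channel scale
beta = np.array([[1.0, -1.0]])  # per-channel shift
y = film(x, gamma, beta)
print(y[0, 0, 0, 0], y[0, 1, 0, 0])  # 3.0 -0.5
```

Because the modulation is a single multiply-add per feature map, it adds almost no parameters or compute, which is what makes it practical to insert throughout a convolutional network.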
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt - Computer Vision