Aligning Visual Regions and Textual Concepts for Semantic-Grounded Image Representations
May 15, 2019 · Entered Twilight · Neural Information Processing Systems
"Last commit was 5.0 years ago (โฅ5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: KarpathySplit.py, Models, NeurIPS2019_MIA_poster.pdf, README.md, Test.py, Train.py, build_vocab.py, coco, data, data_loader.py, model.png, resize_images.py
Authors
Fenglin Liu, Yuanxin Liu, Xuancheng Ren, Xiaodong He, Xu Sun
arXiv ID
1905.06139
Category
cs.CL: Computation & Language
Cross-listed
cs.CV
Citations
93
Venue
Neural Information Processing Systems
Repository
https://github.com/fenglinliu98/MIA
⭐ 65
Last Checked
1 month ago
Abstract
In vision-and-language grounding problems, fine-grained representations of the image are considered to be of paramount importance. Most of the current systems incorporate visual features and textual concepts as a sketch of an image. However, plainly inferred representations are usually undesirable in that they are composed of separate components, the relations of which are elusive. In this work, we aim at representing an image with a set of integrated visual regions and corresponding textual concepts, reflecting certain semantics. To this end, we build the Mutual Iterative Attention (MIA) module, which integrates correlated visual features and textual concepts, respectively, by aligning the two modalities. We evaluate the proposed approach on two representative vision-and-language grounding tasks, i.e., image captioning and visual question answering. In both tasks, the semantic-grounded image representations consistently boost the performance of the baseline models under all metrics across the board. The results demonstrate that our approach is effective and generalizes well to a wide range of models for image-related applications. (The code is available at https://github.com/fenglinliu98/MIA)
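The abstract describes the Mutual Iterative Attention (MIA) module only at a high level: visual regions and textual concepts repeatedly attend to each other so that each modality is refined by its alignment with the other. The following is a minimal, hypothetical PyTorch sketch of that idea; the module name, hidden size, head count, number of iterations, and the use of `nn.MultiheadAttention` are assumptions for illustration, not the authors' implementation from the linked repository.

```python
# Hypothetical sketch of a mutual-iterative-attention style module.
# NOT the authors' code -- dimensions, iteration count, and the choice of
# nn.MultiheadAttention are illustrative assumptions only.
import torch
import torch.nn as nn


class MutualIterativeAttention(nn.Module):
    def __init__(self, dim=512, num_heads=8, num_iters=2):
        super().__init__()
        self.num_iters = num_iters
        # Cross-attention in both directions:
        # regions attend to concepts, concepts attend to regions.
        self.vis_attends_txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.txt_attends_vis = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.vis_norm = nn.LayerNorm(dim)
        self.txt_norm = nn.LayerNorm(dim)

    def forward(self, visual, textual):
        # visual:  (batch, num_regions, dim)  e.g. detector region features
        # textual: (batch, num_concepts, dim) e.g. embedded textual concepts
        for _ in range(self.num_iters):
            # Each modality queries the other, then is updated residually,
            # so both representations become mutually aligned over iterations.
            vis_update, _ = self.vis_attends_txt(visual, textual, textual)
            txt_update, _ = self.txt_attends_vis(textual, visual, visual)
            visual = self.vis_norm(visual + vis_update)
            textual = self.txt_norm(textual + txt_update)
        return visual, textual


if __name__ == "__main__":
    mia = MutualIterativeAttention()
    regions = torch.randn(4, 36, 512)   # 36 regions per image (assumed)
    concepts = torch.randn(4, 10, 512)  # 10 concepts per image (assumed)
    v, t = mia(regions, concepts)
    print(v.shape, t.shape)
```

The integrated region and concept representations returned by such a module would then replace the plainly extracted features fed into a downstream captioning or VQA model, which is how the abstract describes the semantic-grounded representations being used.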
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
R.I.P.
👻 Ghosted
Language Models are Few-Shot Learners
R.I.P.
👻 Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach
R.I.P.
👻 Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
R.I.P.
👻 Ghosted