Old Age
CLEAR: Improving Vision-Language Navigation with Cross-Lingual, Environment-Agnostic Representations
July 05, 2022 · Entered Twilight · NAACL-HLT
Repo contents: .DS_Store, CMakeLists.txt, LICENSE, LICENSE_Matterport3DSimulator, README.md, cmake, connectivity, figures, include, python_requirements.txt, r2r_src, run, src, tasks
Authors
Jialu Li, Hao Tan, Mohit Bansal
arXiv ID
2207.02185
Category
cs.CV: Computer Vision
Cross-listed
cs.AI,
cs.CL,
cs.LG
Citations
12
Venue
NAACL-HLT
Repository
https://github.com/jialuli-luka/CLEAR
⭐ 6
Last Checked
1 month ago
Abstract
Vision-and-Language Navigation (VLN) tasks require an agent to navigate through the environment based on language instructions. In this paper, we aim to solve two key challenges in this task: utilizing multilingual instructions for improved instruction-path grounding and navigating through new environments that are unseen during training. To address these challenges, we propose CLEAR: Cross-Lingual and Environment-Agnostic Representations. First, our agent learns a shared and visually-aligned cross-lingual language representation for the three languages (English, Hindi and Telugu) in the Room-Across-Room dataset. Our language representation learning is guided by text pairs that are aligned by visual information. Second, our agent learns an environment-agnostic visual representation by maximizing the similarity between semantically-aligned image pairs (with constraints on object-matching) from different environments. Our environment agnostic visual representation can mitigate the environment bias induced by low-level visual information. Empirically, on the Room-Across-Room dataset, we show that our multilingual agent gets large improvements in all metrics over the strong baseline model when generalizing to unseen environments with the cross-lingual language representation and the environment-agnostic visual representation. Furthermore, we show that our learned language and visual representations can be successfully transferred to the Room-to-Room and Cooperative Vision-and-Dialogue Navigation task, and present detailed qualitative and quantitative generalization and grounding analysis. Our code is available at https://github.com/jialuli-luka/CLEAR
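The abstract describes learning environment-agnostic representations by maximizing the similarity between semantically-aligned pairs from different environments. A common way to realize this kind of objective is an InfoNCE-style contrastive loss over a batch of aligned embedding pairs; the sketch below is illustrative only (function names, temperature value, and the exact loss form are assumptions, not the paper's formulation):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pairwise_alignment_loss(pairs, temperature=0.1):
    """InfoNCE-style contrastive loss: each aligned pair (x_i, y_i)
    should be more similar to each other than x_i is to any other y_j
    in the batch. Lower loss means better alignment."""
    xs, ys = zip(*pairs)
    # Similarity matrix: row i holds sim(x_i, y_j) for all j.
    sims = np.array([[cosine_sim(x, y) for y in ys] for x in xs]) / temperature
    # Numerically stable log-softmax over each row.
    logits = sims - sims.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The aligned pair sits on the diagonal; penalize low probability there.
    return float(-np.diag(log_probs).mean())
```

With perfectly aligned pairs (each `x_i` pointing the same way as its `y_i`) the diagonal dominates and the loss is small; shuffling the pairing raises it, which is the signal that would push the encoder toward environment-agnostic features.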
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computer Vision
Old Age
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
R.I.P.
Ghosted
You Only Look Once: Unified, Real-Time Object Detection
Old Age
SSD: Single Shot MultiBox Detector
Old Age
Squeeze-and-Excitation Networks
R.I.P.
Ghosted