Exploiting Temporal Relationships in Video Moment Localization with Natural Language

August 11, 2019 · Declared Dead · 🏛 ACM Multimedia

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Songyang Zhang, Jinsong Su, Jiebo Luo
arXiv ID: 1908.03846
Category: cs.MM (Multimedia)
Cross-listed: cs.CV, cs.IR
Citations: 77
Venue: ACM Multimedia
Last Checked: 1 month ago
Abstract
We address the problem of video moment localization with natural language, i.e., localizing a video segment described by a natural language sentence. While most prior work focuses on grounding the query as a whole, temporal dependencies and reasoning between events within the text are not fully considered. In this paper, we propose a novel Temporal Compositional Modular Network (TCMN) in which a tree attention network first automatically decomposes a sentence into three descriptions corresponding to the main event, the context event, and the temporal signal. Two modules are then used to measure the visual similarity and the location similarity between each segment and the decomposed descriptions. Moreover, since the main event and context event may rely on different modalities (RGB or optical flow), we use late fusion to form an ensemble of four models, each trained independently on one combination of the visual inputs. Experiments show that our model outperforms state-of-the-art methods on the TEMPO dataset.
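Since the code was never released, here is a minimal sketch of the late-fusion step the abstract describes: four models, one per (main-event, context-event) modality pairing of RGB and optical flow, each scoring every candidate segment, with the scores averaged before picking the moment. All names and the averaging rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical late-fusion ensemble over the four RGB/optical-flow pairings
# described in the TCMN abstract. Names and averaging rule are assumptions;
# the authors' code was never released.
MODALITY_PAIRS = [("rgb", "rgb"), ("rgb", "flow"),
                  ("flow", "rgb"), ("flow", "flow")]

def fuse_scores(per_model_scores):
    """Average per-segment similarity scores across models and return the
    index of the best-scoring segment (the localized moment).

    per_model_scores: list of 4 lists, one per modality pairing, each with
    one similarity score per candidate segment.
    """
    num_models = len(per_model_scores)
    num_segments = len(per_model_scores[0])
    # Late fusion: average each segment's score over the four models.
    fused = [sum(model[i] for model in per_model_scores) / num_models
             for i in range(num_segments)]
    return max(range(num_segments), key=fused.__getitem__)

# Toy example: 4 models scoring 3 candidate segments.
scores = [[0.2, 0.7, 0.1],
          [0.3, 0.6, 0.2],
          [0.1, 0.8, 0.3],
          [0.2, 0.5, 0.4]]
best = fuse_scores(scores)  # segment 1 has the highest average score
```

Averaging is the simplest late-fusion rule; the paper may well use a weighted or learned combination instead.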
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Multimedia

Died the same way: 👻 Ghosted