ChatVTG: Video Temporal Grounding via Chat with Video Dialogue Large Language Models

October 01, 2024 · Declared Dead · 🏛 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Mengxue Qu, Xiaodong Chen, Wu Liu, Alicia Li, Yao Zhao
arXiv ID: 2410.12813
Category: cs.MM (Multimedia)
Cross-listed: cs.CV
Citations: 40
Venue: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Last Checked: 1 month ago
Abstract
Video Temporal Grounding (VTG) aims to localize the specific segments within an untrimmed video that correspond to a given natural language query. Existing VTG methods largely depend on supervised learning and extensive annotated data, whose collection is labor-intensive and prone to human bias. To address these challenges, we present ChatVTG, a novel approach that utilizes Video Dialogue Large Language Models (LLMs) for zero-shot video temporal grounding. ChatVTG leverages Video Dialogue LLMs to generate multi-granularity segment captions and matches these captions against the given query for coarse temporal grounding, circumventing the need for paired annotation data. Furthermore, to obtain more precise temporal grounding results, we employ moment refinement over the fine-grained caption proposals. Extensive experiments on three mainstream VTG datasets, Charades-STA, ActivityNet-Captions, and TACoS, demonstrate the effectiveness of ChatVTG, which surpasses current zero-shot methods.
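The abstract describes a coarse-to-fine zero-shot pipeline: caption video segments at multiple granularities with a Video Dialogue LLM, match the captions against the query for coarse grounding, then refine the winning moment's boundaries. Since no code was ever released (hence this page), below is a minimal Python sketch of that idea under stated assumptions: `caption_segment` is a hypothetical stand-in for the Video Dialogue LLM call, and the bag-of-words cosine matcher is a toy assumption, as the abstract does not specify the caption-query matching function.

```python
# Illustrative sketch of a ChatVTG-style zero-shot grounding pipeline.
# `caption_segment` is a hypothetical stand-in for a Video Dialogue LLM;
# the token-overlap matcher is an assumption, not the paper's method.
from collections import Counter
from math import sqrt
from typing import Callable, Tuple

def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two strings (toy matcher)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = sqrt(sum(c * c for c in va.values()))
    nb = sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def ground_query(
    video_len: float,
    query: str,
    caption_segment: Callable[[float, float], str],
    granularities: Tuple[int, ...] = (2, 4, 8),
    refine_step: float = 1.0,
) -> Tuple[float, float]:
    """Coarse-to-fine zero-shot temporal grounding.

    1. Split the video into segments at several granularities and caption
       each with the (hypothetical) Video Dialogue LLM.
    2. Pick the segment whose caption best matches the query (coarse stage).
    3. Locally shift the boundaries and keep the best-scoring proposal
       (moment refinement).
    """
    # Stages 1-2: multi-granularity captioning + coarse caption-query matching.
    best_score, best_span = -1.0, (0.0, video_len)
    for n in granularities:
        seg_len = video_len / n
        for i in range(n):
            start, end = i * seg_len, (i + 1) * seg_len
            score = cosine_sim(caption_segment(start, end), query)
            if score > best_score:
                best_score, best_span = score, (start, end)

    # Stage 3: moment refinement around the coarse span.
    start, end = best_span
    for ds in (-refine_step, 0.0, refine_step):
        for de in (-refine_step, 0.0, refine_step):
            s, e = max(0.0, start + ds), min(video_len, end + de)
            if e <= s:
                continue
            score = cosine_sim(caption_segment(s, e), query)
            if score > best_score:
                best_score, best_span = score, (s, e)
    return best_span

if __name__ == "__main__":
    # Dummy captioner: pretend seconds 10-20 of a 40 s video show a fridge scene.
    captioner = lambda s, e: (
        "a person opens the fridge" if s < 20 and e > 10 else "an empty kitchen"
    )
    print(ground_query(40.0, "person opening the fridge", captioner))
```

With the dummy captioner, the coarse stage settles on the first 20-second segment and refinement leaves it essentially unchanged; with a real Video Dialogue LLM and a learned text matcher, the same loop structure would apply.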
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Multimedia

Died the same way — 👻 Ghosted