Balancing Shared and Task-Specific Representations: A Hybrid Approach to Depth-Aware Video Panoptic Segmentation

December 10, 2024 · Entered Twilight · 🏛 IEEE Workshop/Winter Conference on Applications of Computer Vision

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

"No code URL or promise found in abstract"
"Code repo scraped from project page (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, ADVANCED_USAGE.md, CODE_OF_CONDUCT.md, CONTRIBUTING.md, GETTING_STARTED.md, INSTALL.md, LICENSE, MODEL_ZOO.md, README.md, cog.yaml, configs, datasets, demo, demo_video, mask2former, mask2former_video, predict.py, requirements.txt, tools, train_net.py, train_net_video.py

Authors: Kurt H. W. Stolle
arXiv ID: 2412.07966
Category: cs.CV (Computer Vision)
Citations: 0
Venue: IEEE Workshop/Winter Conference on Applications of Computer Vision
Repository: https://github.com/facebookresearch/Mask2Former ⭐ 3288
Last Checked: 5 days ago
Abstract
In this work, we present Multiformer, a novel approach to depth-aware video panoptic segmentation (DVPS) based on the mask transformer paradigm. Our method learns object representations that are shared across segmentation, monocular depth estimation, and object tracking subtasks. In contrast to recent unified approaches that progressively refine a common object representation, we propose a hybrid method using task-specific branches within each decoder block, ultimately fusing them into a shared representation at the block interfaces. Extensive experiments on the Cityscapes-DVPS and SemKITTI-DVPS datasets demonstrate that Multiformer achieves state-of-the-art performance across all DVPS metrics, outperforming previous methods by substantial margins. With a ResNet-50 backbone, Multiformer surpasses the previous best result by 3.0 DVPQ points while also improving depth estimation accuracy. Using a Swin-B backbone, Multiformer further improves performance by 4.0 DVPQ points. Multiformer also provides valuable insights into the design of multi-task decoder architectures.
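The abstract's core architectural idea, task-specific branches inside each decoder block whose outputs are fused back into a shared object representation at the block interface, can be illustrated with a minimal sketch. This is not the paper's implementation: the dimensions, branch/fusion projections, and activation choices below are all hypothetical, and real mask-transformer decoder blocks would use attention over image features rather than plain linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 32          # embedding dim of the shared object queries (hypothetical)
N_QUERIES = 8   # number of object queries (hypothetical)
TASKS = ["segmentation", "depth", "tracking"]  # the three DVPS subtasks

# Hypothetical per-task branch weights: each branch refines the shared
# representation independently within a decoder block.
branch_w = {t: rng.standard_normal((D, D)) * 0.05 for t in TASKS}

# Hypothetical fusion weights: concatenated branch outputs are projected
# back to the shared dimension at the block interface.
fuse_w = rng.standard_normal((len(TASKS) * D, D)) * 0.05

def hybrid_decoder_block(shared):
    """One decoder block: task-specific branches, then fusion at the interface."""
    outs = [np.tanh(shared @ branch_w[t]) for t in TASKS]  # per-task refinement
    fused = np.concatenate(outs, axis=-1) @ fuse_w         # fuse branch outputs
    return shared + fused                                  # residual shared update

# A stack of such blocks progressively updates the shared queries.
queries = rng.standard_normal((N_QUERIES, D))
for _ in range(3):
    queries = hybrid_decoder_block(queries)
print(queries.shape)  # one shared representation feeds all task heads
```

The design point this sketch captures is the contrast drawn in the abstract: instead of a single monolithic refinement path, each block branches per task and re-merges, so downstream blocks (and all task heads) still consume one shared representation.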
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Computer Vision