SASFormer: Transformers for Sparsely Annotated Semantic Segmentation

December 05, 2022 · Declared Dead · 🏛 IEEE International Conference on Multimedia and Expo

💀 CAUSE OF DEATH: 404 Not Found
Code link is broken/dead
Authors: Hui Su, Yue Ye, Wei Hua, Lechao Cheng, Mingli Song
arXiv ID: 2212.02019
Category: cs.CV (Computer Vision)
Citations: 8
Venue: IEEE International Conference on Multimedia and Expo
Repository: https://github.com/su-hui-zz/SASFormer
Last Checked: 1 month ago
Abstract
Semantic segmentation based on sparse annotation has advanced in recent years. Sparse annotation labels only part of each object in an image, leaving the remainder unlabeled. Most existing approaches are time-consuming and often require a multi-stage training strategy. In this work, we propose a simple yet effective sparsely annotated semantic segmentation framework based on SegFormer, dubbed SASFormer, that achieves remarkable performance. Specifically, the framework first generates hierarchical patch attention maps, which are then multiplied by the network predictions to produce correlated regions separated by the valid labels. In addition, we introduce an affinity loss to ensure consistency between the features of the correlation results and the network predictions. Extensive experiments show that the proposed approach outperforms existing methods and achieves state-of-the-art performance. The source code is available at https://github.com/su-hui-zz/SASFormer.
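The abstract's core mechanism can be illustrated with a short sketch: an attention map is multiplied element-wise with the per-class predictions to form a correlated result, and a consistency term penalizes disagreement between that result and the raw predictions. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation; the function names and the mean-squared stand-in for the affinity loss are hypothetical, since the repository is unavailable.

```python
import numpy as np

def correlated_regions(attn, preds):
    """Weight class predictions by a patch-attention map.

    attn:  (H, W) attention weights in [0, 1]
    preds: (C, H, W) per-class prediction scores
    Returns the attention-weighted predictions, shape (C, H, W).
    """
    # Broadcast the single attention map across all C class channels.
    return preds * attn[None, :, :]

def affinity_loss(correlated, preds):
    """Mean-squared consistency between the correlated result and the
    raw predictions (a hypothetical stand-in for the paper's affinity loss)."""
    return float(np.mean((correlated - preds) ** 2))

# Toy data: one 8x8 attention map and a 4-class prediction map.
rng = np.random.default_rng(0)
attn = rng.random((8, 8))
preds = rng.random((4, 8, 8))

corr = correlated_regions(attn, preds)
loss = affinity_loss(corr, preds)
```

In the paper's setting this would be repeated per hierarchical stage of the SegFormer backbone, with the sparse labels supervising only the annotated pixels.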
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Computer Vision

Died the same way – 💀 404 Not Found