Recurrent Multi-scale Transformer for High-Resolution Salient Object Detection

August 07, 2023 · Entered Twilight · 🏛 ACM Multimedia

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: LICENSE, README.md, convert_pth.py, dataloader_collect, eval.py, evaluation_code.zip, find_best_pth.py, gen_edgemap.py, losses, measures.py, model, myconfig.py, pic, runs1, save_models, summery.py, test.py, train.py, train.sh, train_data

Authors: Xinhao Deng, Pingping Zhang, Wei Liu, Huchuan Lu
arXiv ID: 2308.03826
Category: cs.CV: Computer Vision
Cross-listed: cs.AI, cs.MM
Citations: 31
Venue: ACM Multimedia
Repository: https://github.com/DrowsyMon/RMFormer ⭐ 34
Last Checked: 1 month ago
Abstract
Salient Object Detection (SOD) aims to identify and segment the most conspicuous objects in an image or video. As an important pre-processing step, it has many potential applications in multimedia and vision tasks. With the advance of imaging devices, SOD on high-resolution images has recently been in great demand. However, traditional SOD methods are largely limited to low-resolution images, making it difficult to adapt them to the development of High-Resolution SOD (HRSOD). Although some HRSOD methods have emerged, there are no datasets large enough for training and evaluation. Besides, current HRSOD methods generally produce incomplete object regions and irregular object boundaries. To address the above issues, in this work we first propose a new HRS10K dataset, which contains 10,500 high-quality annotated images at 2K-8K resolution. As far as we know, it is the largest dataset for the HRSOD task, which will significantly help future work in training and evaluating models. Furthermore, to improve HRSOD performance, we propose a novel Recurrent Multi-scale Transformer (RMFormer), which recurrently utilizes shared Transformers and multi-scale refinement architectures. Thus, high-resolution saliency maps can be generated with the guidance of lower-resolution predictions. Extensive experiments on both high-resolution and low-resolution benchmarks show the effectiveness and superiority of the proposed framework. The source code and dataset are released at: https://github.com/DrowsyMon/RMFormer.
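The coarse-to-fine recurrence the abstract describes (each higher-resolution prediction is guided by the upsampled lower-resolution one) can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' code: the blending rule and the `features_per_scale` inputs are hypothetical stand-ins for the shared Transformer stages in RMFormer.

```python
def upsample_nearest(sal, factor):
    """Nearest-neighbour upsampling of a 2D saliency map (list of lists)."""
    out = []
    for row in sal:
        wide = [v for v in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(wide))
    return out

def refine(guidance, features):
    """Toy refinement step: blend the upsampled coarse prediction with a
    finer-scale cue (a stand-in for the Transformer refinement stage)."""
    return [[0.5 * g + 0.5 * f for g, f in zip(g_row, f_row)]
            for g_row, f_row in zip(guidance, features)]

def recurrent_multiscale(features_per_scale):
    """features_per_scale: saliency cues from coarse to fine,
    each map twice the resolution of the previous one."""
    pred = features_per_scale[0]          # coarsest prediction
    for feats in features_per_scale[1:]:  # reuse the same refine step per scale
        pred = refine(upsample_nearest(pred, 2), feats)
    return pred
```

The key design point mirrored here is that one shared refinement routine is applied recurrently at every scale, so resolution grows without duplicating model parameters.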
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Computer Vision