Semantic Foreground Inpainting from Weak Supervision

September 10, 2019 · Entered Twilight · 🏛 IEEE Robotics and Automation Letters

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 6.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, README.md, checkpoints, cs_data_loader.py, dataset, metrics, ours_extractors.py, ours_model.py, ours_test.py, ours_train.py, requirements.txt, util.py

Authors: Chenyang Lu, Gijs Dubbelman
arXiv ID: 1909.04564
Category: cs.CV (Computer Vision)
Cross-listed: cs.RO
Citations: 13
Venue: IEEE Robotics and Automation Letters
Repository: https://github.com/Chenyang-Lu/semantic-foreground-inpainting
⭐ Stars: 7
Last Checked: 1 month ago
Abstract
Semantic scene understanding is an essential task for self-driving vehicles and mobile robots. In our work, we aim to estimate a semantic segmentation map, in which the foreground objects are removed and semantically inpainted with background classes, from a single RGB image. This semantic foreground inpainting task is performed by a single-stage convolutional neural network (CNN) that contains our novel max-pooling as inpainting (MPI) module, which is trained with weak supervision, i.e., it does not require manual background annotations for the foreground regions to be inpainted. Our approach is inherently more efficient than the previous two-stage state-of-the-art method, and outperforms it by a margin of 3% IoU for the inpainted foreground regions on Cityscapes. The performance margin increases to 6% IoU when tested on the unseen KITTI dataset. The code and the manually annotated datasets for testing are shared with the research community at https://github.com/Chenyang-Lu/semantic-foreground-inpainting.
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Computer Vision