Synthesizing the Unseen for Zero-shot Object Detection

October 19, 2020 · Entered Twilight · 🏛 Asian Conference on Computer Vision

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, Docker.md, Dockerfile, ImageNet2017, MSCOCO, README.md, VOC, arguments.py, checkpoints, cls_models.py, dataset.py, dockerRun.md, environment.yml, generate.py, images, mmdetection, model.py, plot.py, plot_tsne.py, script, train_cls.py, train_gan.py, train_unseen_classifier.ipynb, trainer.py, util.py

Authors: Nasir Hayat, Munawar Hayat, Shafin Rahman, Salman Khan, Syed Waqas Zamir, Fahad Shahbaz Khan
arXiv ID: 2010.09425
Category: cs.CV: Computer Vision
Citations: 70
Venue: Asian Conference on Computer Vision
Repository: https://github.com/nasir6/zero_shot_detection ⭐ 64
Last Checked: 1 month ago
Abstract
Existing zero-shot detection approaches project visual features to the semantic domain for seen objects, hoping to map unseen objects to their corresponding semantics during inference. However, since unseen objects are never visualized during training, the detection model is skewed towards seen content, labeling unseen objects as background or as a seen class. In this work, we propose to synthesize visual features for unseen classes, so that the model learns both seen and unseen objects in the visual domain. Consequently, the major challenge becomes: how can unseen objects be accurately synthesized using only their class semantics? Towards this ambitious goal, we propose a novel generative model that uses class semantics not only to generate the features but also to discriminatively separate them. Further, using a unified model, we ensure the synthesized features have high diversity, representing the intra-class differences and the variable localization precision of the detected bounding boxes. We test our approach on three object detection benchmarks, PASCAL VOC, MSCOCO, and ILSVRC detection, under both conventional and generalized settings, showing impressive gains over state-of-the-art methods. Our code is available at https://github.com/nasir6/zero_shot_detection.
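The core idea in the abstract, generate visual features for unseen classes from their semantics so a classifier can be trained on both seen and synthesized unseen features, can be illustrated with a toy sketch. This is not the authors' implementation: their generator is a learned conditional generative model, whereas here a fixed random linear map stands in for it, noise supplies the intra-class diversity the abstract mentions, and a nearest-class-mean classifier stands in for the detector's classification head. All names and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM, SEM_DIM, NOISE_DIM = 16, 8, 4

# Toy class-semantic vectors (stand-ins for word embeddings of class names).
# "unseen" has semantics but contributes no real training images.
semantics = {c: rng.normal(size=SEM_DIM) for c in ["seen_a", "seen_b", "unseen"]}

# Hypothetical generator: a fixed random linear map from [semantics ; noise]
# to the visual feature space. In the paper this role is played by a learned
# conditional generative model trained on seen-class features.
W = rng.normal(size=(SEM_DIM + NOISE_DIM, FEAT_DIM))

def synthesize(class_name, n):
    """Synthesize n visual features conditioned on class semantics."""
    s = np.tile(semantics[class_name], (n, 1))
    z = rng.normal(size=(n, NOISE_DIM))  # noise models intra-class diversity
    return np.concatenate([s, z], axis=1) @ W

# Training pool now contains features for seen AND unseen classes.
train_feats = np.vstack([synthesize(c, 50) for c in semantics])
train_labels = np.repeat(list(semantics), 50)

# Nearest-class-mean classifier over the mixed pool (stand-in for the
# detector's classification head retrained with synthesized features).
centroids = {c: train_feats[train_labels == c].mean(axis=0) for c in semantics}

def classify(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# A test feature from the unseen class can now be assigned its own label
# instead of collapsing to background or a seen class.
test_feat = synthesize("unseen", 1)[0]
print(classify(test_feat))
```

The decisive point is that `centroids` includes an entry for `"unseen"` built purely from synthesized features, so at inference the unseen class competes on equal footing with seen classes in the visual domain.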
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Computer Vision