Hybrid-Attention Guided Network with Multiple Resolution Features for Person Re-Identification

September 16, 2020 · Entered Twilight · 🏛 Information Sciences

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: Base-RCNN-C4.yaml, Base-RetinaNet.yaml, CBAM.py, MultipleFeatureswithoutattention.py, README.md, __init__.py, cbam.py, cuhk, damo, dataset, demo.py, demo, demo_retinanet.py, evaluate.py, evaluate_gpu.py, evaluate_rerank.py, faster_rcnn_R_50_C4.yaml, grad_cam.py, grad_cam_retinanet.py, guided_back_propagation.py, main.py, model, preparecuhkl.py, re_ranking.py, retinanet_R_50_FPN_3x.yaml, samplers.py, test.py, train.py, triplet_loss.py

Authors: Guoqing Zhang, Junchuan Yang, Yuhui Zheng, Yi Wu, Shengyong Chen
arXiv ID: 2009.07536
Category: cs.CV (Computer Vision)
Citations: 38
Venue: Information Sciences
Repository: https://github.com/libraflower/MutipleFeature-for-PRID ⭐ 14
Last Checked: 1 month ago
Abstract
Extracting effective and discriminative features is very important for addressing the challenging person re-identification (re-ID) task. Prevailing deep convolutional neural networks (CNNs) usually use high-level features for identifying pedestrians. However, essential spatial information residing in low-level features, such as shape, texture and color, is lost when learning high-level features, due to extensive padding and pooling operations in the training stage. In addition, most existing person re-ID methods are based on hand-crafted bounding boxes in which images are precisely aligned. This is unrealistic in practical applications, since the object detection algorithms employed often produce inaccurate bounding boxes, which inevitably degrades the performance of existing algorithms. To address these problems, we put forward a novel person re-ID model that fuses high- and low-level embeddings to reduce the information loss incurred in learning high-level features. We then divide the fused embedding into several parts and reconnect them to obtain the global feature and more significant local features, so as to alleviate the effect of inaccurate bounding boxes. In addition, we introduce spatial and channel attention mechanisms into our model, aiming to mine more discriminative features related to the target. Finally, we reconstruct the feature extractor to ensure that our model obtains richer and more robust features. Extensive experiments demonstrate the superiority of our approach over existing approaches. Our code is available at https://github.com/libraflower/MutipleFeature-for-PRID.
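The two key ideas in the abstract, channel attention over a feature map and partitioning the fused embedding into horizontal parts, can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation (their repo's CBAM.py uses learned convolutional weights and a full spatial-attention branch; the random MLP weights, the reduction ratio of 4, and the three-part split here are placeholder choices for demonstration only).

```python
import numpy as np

def channel_attention(feat, reduction=4):
    # feat: (C, H, W). Squeeze spatial dims with average and max pooling,
    # pass both descriptors through a shared two-layer MLP, and rescale
    # each channel by a sigmoid gate (CBAM-style channel attention).
    C = feat.shape[0]
    rng = np.random.default_rng(0)                 # toy weights, stand-in for learned ones
    w1 = rng.standard_normal((C // reduction, C)) * 0.1
    w2 = rng.standard_normal((C, C // reduction)) * 0.1
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)   # FC -> ReLU -> FC
    avg = feat.mean(axis=(1, 2))
    mx = feat.max(axis=(1, 2))
    scale = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid gate, shape (C,)
    return feat * scale[:, None, None]

def part_pool(feat, parts=3):
    # Split the feature map into horizontal stripes and average-pool each,
    # yielding one local descriptor per stripe plus a global descriptor.
    stripes = np.array_split(feat, parts, axis=1)
    local_feats = [s.mean(axis=(1, 2)) for s in stripes]
    global_feat = feat.mean(axis=(1, 2))
    return global_feat, local_feats

feat = np.random.default_rng(1).standard_normal((8, 12, 4))  # (C, H, W)
attended = channel_attention(feat)
g, locals_ = part_pool(attended, parts=3)
print(attended.shape, g.shape, len(locals_))  # (8, 12, 4) (8,) 3
```

In the paper's setting the attended map would come from fused high- and low-level backbone features, and both the global and per-part descriptors would feed the identification losses.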
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Computer Vision