Old Age
Dual-Head Knowledge Distillation: Enhancing Logits Utilization with an Auxiliary Head
November 13, 2024 · Entered Twilight · Knowledge Discovery and Data Mining
Repo contents: .gitignore, README.md, dataset, distiller_zoo, eval_student_imagenet.py, helper, models, requirements.txt, train_student.py, train_student_imagenet.py, train_teacher.py
Authors
Penghui Yang, Chen-Chen Zong, Sheng-Jun Huang, Lei Feng, Bo An
arXiv ID
2411.08937
Category
cs.CV: Computer Vision
Cross-listed
cs.LG
Citations
4
Venue
Knowledge Discovery and Data Mining
Repository
https://github.com/penghui-yang/DHKD
★ 4
Last Checked
1 month ago
Abstract
Traditional knowledge distillation focuses on aligning the student's predicted probabilities with both ground-truth labels and the teacher's predicted probabilities. However, the transition from logits to predicted probabilities can obscure certain indispensable information. To address this issue, it is intuitive to additionally introduce a logit-level loss function as a supplement to the widely used probability-level loss function, so as to exploit the latent information in the logits. Unfortunately, we empirically find that combining the newly introduced logit-level loss with the previous probability-level loss leads to performance degradation, even trailing behind the performance of employing either loss in isolation. We attribute this phenomenon to the collapse of the classification head, which is verified by our theoretical analysis based on neural collapse theory. Specifically, the gradients of the two loss functions exhibit contradictions in the linear classifier yet display no such conflict within the backbone. Drawing from this theoretical analysis, we propose a novel method called dual-head knowledge distillation, which partitions the linear classifier into two classification heads responsible for different losses, thereby preserving the beneficial effects of both losses on the backbone while eliminating adverse influences on the classification head. Extensive experiments validate that our method can effectively exploit the information inside the logits and achieve superior performance against state-of-the-art counterparts. Our code is available at: https://github.com/penghui-yang/DHKD.
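The core idea in the abstract — one shared backbone feeding two separate linear heads, each driven by a different loss — can be sketched in PyTorch as below. This is a minimal illustration, not the repository's implementation: the head names, the temperature-scaled KL divergence for the probability-level term, the MSE for the logit-level term, and the weights `alpha`/`beta` are all assumptions; consult `train_student.py` in the linked repo for the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualHeadStudent(nn.Module):
    """A backbone with two linear classification heads (illustrative sketch).

    One head receives the probability-level losses (CE + KL to the teacher);
    the other receives the logit-level loss. Both gradients flow into the
    shared backbone, but each head sees only its own loss.
    """
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone
        self.prob_head = nn.Linear(feat_dim, num_classes)   # probability-level head
        self.logit_head = nn.Linear(feat_dim, num_classes)  # auxiliary logit-level head

    def forward(self, x):
        feat = self.backbone(x)
        return self.prob_head(feat), self.logit_head(feat)

def dual_head_kd_loss(prob_logits, aux_logits, teacher_logits, targets,
                      T=4.0, alpha=1.0, beta=1.0):
    # Probability-level: cross-entropy with labels plus KL divergence
    # between temperature-softened student and teacher distributions.
    ce = F.cross_entropy(prob_logits, targets)
    kl = F.kl_div(F.log_softmax(prob_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    # Logit-level: direct regression of the auxiliary head's logits
    # onto the teacher's raw logits (MSE assumed here for illustration).
    mse = F.mse_loss(aux_logits, teacher_logits)
    return ce + alpha * kl + beta * mse
```

Because the two heads are separate `nn.Linear` modules, the conflicting gradients described in the abstract land on different weight matrices, while the backbone still benefits from both signals.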
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Computer Vision
Old Age · Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
Ghosted · You Only Look Once: Unified, Real-Time Object Detection
Old Age · SSD: Single Shot MultiBox Detector
Old Age · Squeeze-and-Excitation Networks