MOD: A Deep Mixture Model with Online Knowledge Distillation for Large Scale Video Temporal Concept Localization

October 27, 2019 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era, a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .idea, CONTRIBUTING.md, LICENSE, README.md, __init__.py, analysis, average_precision_calculator.py, cloudml-gpu.yaml, convert_prediction_from_json_to_csv.py, docs, eval.py, eval_util.py, export_model.py, feature_extractor, frame_level_models.py, inference.py, losses.py, mean_average_precision_calculator.py, model_utils.py, models.py, nextvlad.py, parallel_eval.py, parallel_script, parallel_train.py, readers.py, seg_script, segment_eval_inference.py, segment_label_ids.csv, train.py, utils.py, video_level_models.py

Authors: Rongcheng Lin, Jing Xiao, Jianping Fan
arXiv ID: 1910.12295
Category: cs.CV (Computer Vision)
Citations: 4
Venue: arXiv.org
Repository: https://github.com/linrongc/solution_youtube8m_v3 ⭐ 16
Last Checked: 2 months ago
Abstract
In this paper, we present and discuss a deep mixture model with online knowledge distillation (MOD) for large-scale video temporal concept localization, which ranked 3rd in the 3rd YouTube-8M Video Understanding Challenge. Specifically, we find that by enabling knowledge sharing with online distillation, finetuning a mixture model on a smaller dataset can achieve better evaluation performance. Based on this observation, in our final solution, we trained and finetuned 12 NeXtVLAD models in parallel with a 2-layer online distillation structure. The experimental results show that the proposed distillation structure can effectively avoid overfitting and shows superior generalization performance. The code is publicly available at: https://github.com/linrongc/solution_youtube8m_v3
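The core idea in the abstract, a mixture of experts whose members teach each other online rather than distilling from a fixed pretrained teacher, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name, the use of the ensemble mean as the teacher, and the `alpha` weight are all assumptions for demonstration.

```python
import numpy as np

def online_distillation_loss(expert_probs, labels, alpha=0.5):
    """Hypothetical sketch of an online-distillation objective for a
    mixture of experts (simplified; not the authors' exact loss).

    expert_probs: (num_experts, num_classes) per-expert probabilities
    labels:       (num_classes,) multi-hot ground-truth labels
    alpha:        assumed weight of the distillation term
    """
    eps = 1e-8
    # The mixture (ensemble mean) acts as the online teacher:
    # no pretrained teacher model is needed.
    teacher = expert_probs.mean(axis=0)
    # Supervised term: binary cross-entropy of the mixture vs. labels.
    bce = -(labels * np.log(teacher + eps)
            + (1 - labels) * np.log(1 - teacher + eps)).sum()
    # Distillation term: each expert is pulled toward the teacher via
    # KL divergence, so knowledge is shared among experts during training.
    kl = sum((teacher * (np.log(teacher + eps) - np.log(p + eps))).sum()
             for p in expert_probs)
    return bce + alpha * kl
```

When all experts agree, the distillation term vanishes and only the supervised term remains; disagreement among experts is penalized, which is one plausible reading of how the sharing structure regularizes the mixture.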