Multitask Learning to Improve Egocentric Action Recognition

September 15, 2019 · Entered Twilight · 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 6.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, README.md, combined_log_parser.py, combined_log_parser_excel.py, combined_log_parser_excel_from_val.py, dataset_preparation, lstm_tests.py, main_cnn.py, main_eval_cnn.py, main_eval_lstm.py, main_eval_lstm_polar.py, main_eval_mfnet.py, main_eval_mfnet_gaze.py, main_eval_mfnet_gtea.py, main_eval_mfnet_hands.py, main_eval_mfnet_json.py, main_lstm.py, main_lstm_polar.py, main_mfnet.py, main_mfnet_gtea.py, main_mfnet_hands.py, model_tryout.py, models, outputs, parse_train_log.py, splits, utils

Authors: Georgios Kapidis, Ronald Poppe, Elsbeth van Dam, Lucas Noldus, Remco Veltkamp
arXiv ID: 1909.06761
Category: cs.CV: Computer Vision (cross-listed cs.LG)
Citations: 42
Venue: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
Repository: https://github.com/georkap/hand_track_classification ⭐ 6
Last Checked: 1 month ago
Abstract
In this work we employ multitask learning to capitalize on the structure that exists in related supervised tasks to train complex neural networks. It allows training a network for multiple objectives in parallel, improving performance on at least one of them through a shared representation that accommodates more information than it would for a single task. We employ this idea to tackle action recognition in egocentric videos by introducing additional supervised tasks. We consider learning the verbs and nouns of which action labels consist, and predict coordinates that capture the hand locations and the gaze-based visual saliency for all the frames of the input video segments. This forces the network to explicitly focus on cues from secondary tasks that it might otherwise have missed, resulting in improved inference. Our experiments on EPIC-Kitchens and EGTEA Gaze+ show consistent improvements when training with multiple tasks over the single-task baseline. Furthermore, on EGTEA Gaze+ we outperform the state-of-the-art in action recognition by 3.84%. Apart from actions, our method produces accurate hand and gaze estimations as side tasks, without requiring any additional input at test time other than the RGB video clips.
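The multitask setup the abstract describes can be sketched as a shared backbone feeding task-specific heads, with the per-task losses summed into one objective. The following is a minimal numpy illustration, not the paper's implementation: the feature dimension, head shapes, class counts, and the equal-weight loss sum are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(logits, label):
    # Negative log-likelihood of the ground-truth class.
    return float(-np.log(softmax(logits)[label]))

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

# Shared representation for one video clip (stand-in for the video CNN backbone).
feat = rng.standard_normal(64)

# Task-specific linear heads (all dimensions are illustrative placeholders).
w_verb = rng.standard_normal((64, 125))  # verb classifier head
w_noun = rng.standard_normal((64, 352))  # noun classifier head
w_hand = rng.standard_normal((64, 4))    # (x, y) per hand, left and right
w_gaze = rng.standard_normal((64, 2))    # (x, y) gaze-saliency point

verb_logits = feat @ w_verb
noun_logits = feat @ w_noun
hand_pred = feat @ w_hand
gaze_pred = feat @ w_gaze

# Placeholder ground truth for this clip.
verb_gt, noun_gt = 3, 17
hand_gt = rng.uniform(0.0, 1.0, 4)
gaze_gt = rng.uniform(0.0, 1.0, 2)

# The multitask objective sums the per-task losses; backpropagating this sum
# through the shared features is what forces the backbone to encode cues
# (hands, gaze) that help the primary action-recognition task.
total_loss = (cross_entropy(verb_logits, verb_gt)
              + cross_entropy(noun_logits, noun_gt)
              + mse(hand_pred, hand_gt)
              + mse(gaze_pred, gaze_gt))
print(total_loss)
```

At test time only the RGB clip is needed, since the hand and gaze heads consume the same shared features as the verb and noun classifiers.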
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Computer Vision