Self-supervised learning of a facial attribute embedding from video

August 21, 2018 · Entered Twilight · 🏛 British Machine Vision Conference

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"No code URL or promise found in abstract"
"Code repo scraped from project page (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: Datasets, FAb-Net, LICENSE, README.md

Authors: Olivia Wiles, A. Sophia Koepke, Andrew Zisserman
arXiv ID: 1808.06882
Category: cs.CV (Computer Vision)
Citations: 140
Venue: British Machine Vision Conference
Repository: https://github.com/oawiles/FAb-Net (⭐ 87)
Last Checked: 6 days ago
Abstract
We propose a self-supervised framework for learning facial attributes by simply watching videos of a human face speaking, laughing, and moving over time. To perform this task, we introduce a network, Facial Attributes-Net (FAb-Net), that is trained to embed multiple frames from the same video face-track into a common low-dimensional space. With this approach, we make three contributions: first, we show that the network can leverage information from multiple source frames by predicting confidence/attention masks for each frame; second, we demonstrate that using a curriculum learning regime improves the learned embedding; finally, we demonstrate that the network learns a meaningful face embedding that encodes information about head pose, facial landmarks and facial expression, i.e. facial attributes, without having been supervised with any labelled data. Our approach is comparable or superior to state-of-the-art self-supervised methods on these tasks and approaches the performance of supervised methods.
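The abstract describes the core mechanism: several source frames from the same face-track are embedded into a low-dimensional space, and per-source confidence/attention masks weight their contributions when reconstructing a target frame. Below is a minimal PyTorch sketch of that idea. The module names (FrameEncoder, FrameDecoder, reconstruction_loss), layer sizes, and the L1 reconstruction loss are illustrative assumptions, not the authors' released implementation, which lives in the repository above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameEncoder(nn.Module):
    """Encodes a face frame into a low-dimensional attribute embedding (illustrative sizes)."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, embed_dim)

    def forward(self, x):                        # x: (B, 3, H, W)
        return self.fc(self.conv(x).flatten(1))  # (B, embed_dim)

class FrameDecoder(nn.Module):
    """Predicts an RGB reconstruction of the target frame plus a confidence-mask logit
    from one source embedding and the target embedding (an assumed simplification)."""
    def __init__(self, embed_dim: int = 256, out_hw: int = 64):
        super().__init__()
        self.out_hw = out_hw
        self.fc = nn.Linear(2 * embed_dim, 4 * out_hw * out_hw)  # 3 RGB channels + 1 mask channel

    def forward(self, z_src, z_tgt):
        out = self.fc(torch.cat([z_src, z_tgt], dim=1))
        out = out.view(-1, 4, self.out_hw, self.out_hw)
        rgb, mask_logit = out[:, :3], out[:, 3:]
        return torch.sigmoid(rgb), mask_logit

def reconstruction_loss(encoder, decoder, source_frames, target_frame):
    """Weight per-source predictions by softmax-normalised confidence masks and
    score the combined reconstruction against the target frame (assumed L1 loss)."""
    z_tgt = encoder(target_frame)
    preds, logits = [], []
    for src in source_frames:                    # each src: (B, 3, H, W)
        rgb, logit = decoder(encoder(src), z_tgt)
        preds.append(rgb)
        logits.append(logit)
    weights = torch.softmax(torch.stack(logits), dim=0)  # attention over source frames
    recon = (torch.stack(preds) * weights).sum(dim=0)    # (B, 3, out_hw, out_hw)
    target = F.interpolate(target_frame, size=recon.shape[-2:])
    return F.l1_loss(recon, target), recon
```

The curriculum regime mentioned in the abstract would then, for example, gradually increase the number or difficulty of source frames passed to a routine like reconstruction_loss during training; the exact schedule here is an assumption.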
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Computer Vision