Building One-Shot Semi-supervised (BOSS) Learning up to Fully Supervised Performance

June 16, 2020 · Entered Twilight · 🏛 Frontiers in Artificial Intelligence

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 5.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: PT-BOSS, README.md, TF-BOSS

Authors: Leslie N. Smith, Adam Conovaloff
arXiv ID: 2006.09363
Category: cs.LG (Machine Learning)
Cross-listed: cs.CV, cs.NE, eess.IV, stat.ML
Citations: 8
Venue: Frontiers in Artificial Intelligence
Repository: https://github.com/lnsmith54/BOSS (⭐ 36)
Last checked: 1 month ago
Abstract
Reaching the performance of fully supervised learning with unlabeled data and only one labeled sample per class would be ideal for deep learning applications. We demonstrate for the first time the potential for building one-shot semi-supervised (BOSS) learning on CIFAR-10 and SVHN to attain test accuracies comparable to fully supervised learning. Our method combines class prototype refining, class balancing, and self-training. A good prototype choice is essential, and we propose a technique for obtaining iconic examples. In addition, we demonstrate that class balancing methods substantially improve accuracy in semi-supervised learning, to levels that allow self-training to reach fully supervised performance. Rigorous empirical evaluations provide evidence that labeling large datasets is not necessary for training deep neural networks. Our code is available at https://github.com/lnsmith54/BOSS to facilitate replication and use in future real-world applications.
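
To make the pipeline described in the abstract more concrete, below is a minimal sketch of one self-training round with a simple per-class cap standing in for the paper's class-balancing idea. This is an illustrative reading of the abstract, not the authors' implementation; the function name, the confidence threshold, the per-class cap, and the assumption that the unlabeled loader yields plain image batches are all choices made for this example. The authors' actual PyTorch and TensorFlow code lives in the PT-BOSS and TF-BOSS directories of the repository.

```python
# Illustrative sketch only (not the BOSS implementation): collect confident,
# class-balanced pseudo-labels from unlabeled data for one self-training round.
import torch
import torch.nn.functional as F

def pseudo_label_balanced(model, unlabeled_loader, num_classes=10,
                          confidence_threshold=0.95, per_class_cap=500,
                          device="cpu"):
    """Return (inputs, pseudo_labels) kept under a confidence and balance rule."""
    model.eval()
    kept_inputs, kept_labels = [], []
    counts = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        for inputs in unlabeled_loader:          # assumed: batches of image tensors
            inputs = inputs.to(device)
            probs = F.softmax(model(inputs), dim=1)
            conf, preds = probs.max(dim=1)
            for x, c, p in zip(inputs, conf, preds):
                # keep only confident predictions, and cap each class so no
                # single class dominates the pseudo-labeled set
                if c >= confidence_threshold and counts[p] < per_class_cap:
                    kept_inputs.append(x.cpu())
                    kept_labels.append(p.cpu())
                    counts[p] += 1
    if not kept_inputs:
        return None
    return torch.stack(kept_inputs), torch.stack(kept_labels)
```

In a full run, the returned tensors would be merged with the one labeled prototype per class and used for a supervised fine-tuning pass, which is the role self-training plays in the abstract's description.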