Soft-Label Dataset Distillation and Text Dataset Distillation

October 06, 2019 · Declared Dead · 🏛 IEEE International Joint Conference on Neural Networks

💀 CAUSE OF DEATH: 404 Not Found
Code link is broken/dead
Authors: Ilia Sucholutsky, Matthias Schonlau
arXiv ID: 1910.02551
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI, stat.ML
Citations: 151
Venue: IEEE International Joint Conference on Neural Networks (IJCNN)
Repository: https://github.com/ilia10000/dataset-distillation
Last Checked: 1 month ago
Abstract
Dataset distillation is a method for reducing dataset sizes by learning a small number of synthetic samples containing all the information of a large dataset. This has several benefits, like speeding up model training, reducing energy consumption, and reducing required storage space. Currently, each synthetic sample is assigned a single 'hard' label, and dataset distillation can currently only be used with image data. We propose to simultaneously distill both images and their labels, thus assigning each synthetic sample a 'soft' label (a distribution over labels). Our algorithm increases accuracy by 2-4% over the original algorithm for several image classification tasks. Using 'soft' labels also enables distilled datasets to consist of fewer samples than there are classes, as each sample can encode information for multiple classes. For example, training a LeNet model with 10 distilled images (one per class) results in over 96% accuracy on MNIST, and almost 92% accuracy when trained on just 5 distilled images. We also extend the dataset distillation algorithm to distill sequential datasets, including text. We demonstrate that text distillation outperforms other methods across multiple datasets. For example, models attain almost their original accuracy on the IMDB sentiment analysis task using just 20 distilled sentences. Our code can be found at https://github.com/ilia10000/dataset-distillation.
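The abstract only sketches the method at a high level: the distilled images and their soft label distributions are both treated as learnable tensors and optimized so that a freshly initialized network, after a few gradient steps on the distilled data, performs well on the real training set. The snippet below is a minimal sketch of that idea in PyTorch, not the authors' implementation (that is the broken repository linked above); the model, hyperparameters, and helper names (`soft_ce`, `distill_step`, `inner_lr`) are illustrative assumptions, and it assumes PyTorch >= 2.0 for `torch.func.functional_call`.

```python
# Sketch of soft-label dataset distillation: learn synthetic images AND soft labels.
# Assumptions: PyTorch >= 2.0, MNIST-shaped inputs (1x28x28), 10 classes.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

device = "cuda" if torch.cuda.is_available() else "cpu"

class LeNet(nn.Module):
    """Small LeNet-style student network trained on the distilled data."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 5, padding=2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.flatten(1)
        return self.fc3(F.relu(self.fc2(F.relu(self.fc1(x)))))

# Learnable distilled data: images, soft-label logits, and an inner-loop step size.
# With soft labels, n_distilled may even be smaller than the number of classes.
n_distilled, n_classes = 10, 10
syn_x = torch.randn(n_distilled, 1, 28, 28, device=device, requires_grad=True)
syn_y_logits = torch.randn(n_distilled, n_classes, device=device, requires_grad=True)
inner_lr = torch.tensor(0.02, device=device, requires_grad=True)
outer_opt = torch.optim.Adam([syn_x, syn_y_logits, inner_lr], lr=1e-3)

def soft_ce(logits, soft_targets):
    # Cross-entropy against a distribution of labels (the "soft" labels).
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

def distill_step(real_x, real_y, inner_steps=3):
    """One outer optimization step on the distilled data using a batch of real data."""
    model = LeNet().to(device)  # fresh randomly initialized student each outer step
    params = {k: v.detach().clone().requires_grad_(True)
              for k, v in model.named_parameters()}

    # Inner loop: unrolled SGD on the distilled images with soft labels.
    for _ in range(inner_steps):
        logits = functional_call(model, params, (syn_x,))
        loss = soft_ce(logits, F.softmax(syn_y_logits, dim=1))
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

    # Outer objective: the student trained on distilled data should fit real data.
    outer_loss = F.cross_entropy(functional_call(model, params, (real_x,)), real_y)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
    return outer_loss.item()
```

In use, `distill_step` would be called repeatedly with minibatches of real training images and labels; differentiating through the unrolled inner SGD steps is what routes gradients back to the synthetic images, the soft-label logits, and the learnable inner-loop step size.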
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Machine Learning

Died the same way – 💀 404 Not Found