A Generic Approach for Reproducible Model Distillation

November 22, 2022 · Entered Twilight · 🏛 Machine-mediated learning

💀 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: FRL.png, FRL, LICENSE, README.md, SymRegress, Tree, heatmap1.png, heatmap2.png, heatmap_ind.png, requirements.txt, symbolic.png, tree.png, uci_mammo_data.csv

Authors: Yunzhe Zhou, Peiru Xu, Giles Hooker
arXiv ID: 2211.12631
Category: stat.ML: Machine Learning (Stat)
Cross-listed: cs.LG
Citations: 3
Venue: Machine-mediated learning
Repository: https://github.com/yunzhe-zhou/GenericDistillation ⭐ 2
Last Checked: 1 month ago
Abstract
Model distillation has been a popular method for producing interpretable machine learning. It uses an interpretable "student" model to mimic the predictions made by the black-box "teacher" model. However, when the student model is sensitive to the variability of the data sets used for training, even with the teacher fixed, the corresponding interpretation is not reliable. Existing strategies stabilize model distillation by checking whether a large enough corpus of pseudo-data has been generated to reliably reproduce student models, but such methods have so far been developed only for specific student models. In this paper, we develop a generic approach for stable model distillation based on a central limit theorem for the average loss. We start with a collection of candidate student models and search for candidates that reasonably agree with the teacher. We then construct a multiple testing framework to select a corpus size such that a consistent student model is selected under different pseudo-samples. We demonstrate the application of the proposed approach on three commonly used intelligible models: decision trees, falling rule lists, and symbolic regression. Finally, we conduct simulation experiments on the Mammographic Mass and Breast Cancer datasets and illustrate the testing procedure through a theoretical analysis with a Markov process. The code is publicly available at https://github.com/yunzhe-zhou/GenericDistillation.
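The abstract describes a concrete loop: generate pseudo-data, label it with the teacher, fit candidate students, and grow the corpus until the selected student is stable across pseudo-samples. Below is a minimal Python sketch of that loop, assuming scikit-learn, a random forest standing in for the black-box teacher, and a naive perturbation generator for pseudo-data; the paper's CLT-based multiple-testing procedure is simplified here to a crude prediction-agreement check, so this illustrates the idea rather than the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Black-box "teacher": any model whose predictions we want to explain.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
teacher = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def pseudo_sample(n):
    """Draw a pseudo-data corpus by perturbing the training inputs.
    (A stand-in for whatever pseudo-data generator is actually used.)"""
    idx = rng.integers(0, len(X), size=n)
    return X[idx] + rng.normal(scale=0.1, size=(n, X.shape[1]))

def fit_student(n, depth, seed):
    """Fit one candidate student (a shallow tree) on teacher labels."""
    Xp = pseudo_sample(n)
    yp = teacher.predict(Xp)
    return DecisionTreeClassifier(max_depth=depth, random_state=seed).fit(Xp, yp)

# Crude stability check: grow the corpus until independently drawn
# pseudo-samples yield students that agree on held-out points.
# (The paper replaces this with a CLT-based multiple-testing procedure.)
X_eval = pseudo_sample(500)
for n in [500, 2000, 8000, 32000]:
    preds = [fit_student(n, depth=3, seed=s).predict(X_eval) for s in range(5)]
    agree = np.mean([np.mean(p == preds[0]) for p in preds[1:]])
    print(f"corpus size {n}: mean pairwise agreement {agree:.3f}")
```

In the paper's actual procedure, the central limit theorem for the average loss supplies the distribution underlying the tests, and the corpus size is increased until the same candidate student is consistently selected across independently drawn pseudo-samples.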
Community shame: Not yet rated

📜 Similar Papers

In the same crypt — Machine Learning (Stat)

R.I.P. 👻 Ghosted

Graph Attention Networks

Petar Veličković, Guillem Cucurull, ... (+4 more)

stat.ML πŸ› ICLR πŸ“š 24.7K cites 8 years ago
R.I.P. 👻 Ghosted

Layer Normalization

Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton

stat.ML πŸ› arXiv πŸ“š 12.0K cites 9 years ago