Reasoning About Generalization via Conditional Mutual Information

January 24, 2020 · Declared Dead · 🏛 Annual Conference on Computational Learning Theory

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Thomas Steinke, Lydia Zakynthinou
arXiv ID: 2001.09122
Category: cs.LG (Machine Learning)
Cross-listed: cs.CR, cs.DS, cs.IT, stat.ML
Citations: 192
Venue: Annual Conference on Computational Learning Theory
Last checked: 1 month ago
Abstract
We provide an information-theoretic framework for studying the generalization properties of machine learning algorithms. Our framework ties together existing approaches, including uniform convergence bounds and recent methods for adaptive data analysis. Specifically, we use Conditional Mutual Information (CMI) to quantify how well the input (i.e., the training data) can be recognized given the output (i.e., the trained model) of the learning algorithm. We show that bounds on CMI can be obtained from VC dimension, compression schemes, differential privacy, and other methods. We then show that bounded CMI implies various forms of generalization.
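
For orientation, here is a minimal sketch of the framework the abstract describes, reconstructed from the arXiv version. The supersample notation, the entropy units in nats, and the exact constant in the final bound are our paraphrase, not quoted from this page:

```latex
% Supersample setup (assumed from arXiv:2001.09122): n pairs of i.i.d. examples,
% plus n uniform selector bits that pick one example from each pair for training.
\[
  \tilde{Z} \in \mathcal{Z}^{n \times 2}, \qquad
  S \sim \mathrm{Uniform}\!\left(\{0,1\}^n\right), \qquad
  \tilde{Z}_S = \left(\tilde{Z}_{1,S_1}, \dots, \tilde{Z}_{n,S_n}\right).
\]
% CMI of an algorithm A measures how well the training data can be "recognized"
% from the output: the information the output carries about the selector bits,
% conditioned on the supersample. It is always finite, unlike the unconditional
% input-output mutual information, which can be infinite.
\[
  \mathrm{CMI}(A) \;=\; I\!\left(A(\tilde{Z}_S);\, S \,\middle|\, \tilde{Z}\right)
  \;\le\; H(S) \;=\; n \ln 2 .
\]
% Bounded CMI implies generalization: for a loss taking values in [0,1], the
% expected generalization gap satisfies (up to our paraphrase of the constant)
\[
  \mathbb{E}\!\left[ L_{\mathcal{D}}\!\left(A(\tilde{Z}_S)\right)
    - L_{\tilde{Z}_S}\!\left(A(\tilde{Z}_S)\right) \right]
  \;\le\; \sqrt{\frac{2\,\mathrm{CMI}(A)}{n}} .
\]
```

The ceiling of n ln 2 holds simply because S consists of n independent uniform bits; the abstract's point is that VC dimension, compression schemes, and differential privacy all force CMI well below that ceiling, which in turn bounds the generalization gap.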
Community shame: Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Machine Learning

Died the same way – 👻 Ghosted