Active Regression via Linear-Sample Sparsification

November 27, 2017 · Declared Dead · 🏛 Annual Conference on Computational Learning Theory

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Xue Chen, Eric Price
arXiv ID: 1711.10051
Category: cs.LG (Machine Learning)
Cross-listed: cs.DS
Citations: 67
Venue: Annual Conference on Computational Learning Theory
Last Checked: 1 month ago
Abstract
We present an approach that improves the sample complexity for a variety of curve fitting problems, including active learning for linear regression, polynomial regression, and continuous sparse Fourier transforms. In the active linear regression problem, one would like to estimate the least squares solution $β^*$ minimizing $\|Xβ - y\|_2$ given the entire unlabeled dataset $X \in \mathbb{R}^{n \times d}$ but only observing a small number of labels $y_i$. We show that $O(d)$ labels suffice to find a constant factor approximation $\tilde{β}$: \[ \mathbb{E}[\|X\tilde{β} - y\|_2^2] \leq 2 \, \mathbb{E}[\|X β^* - y\|_2^2]. \] This improves on the best previous result of $O(d \log d)$ from leverage score sampling. We also present results for the \emph{inductive} setting, showing when $\tilde{β}$ will generalize to fresh samples; these apply to continuous settings such as polynomial regression. Finally, we show how the techniques yield improved results for the non-linear sparse Fourier transform setting.
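The $O(d \log d)$ baseline the abstract improves on is classical leverage score sampling: label each row with probability proportional to its leverage score, reweight the sampled rows, and solve the weighted least squares problem. The sketch below illustrates that baseline (not the paper's $O(d)$ method) on a synthetic instance; the data sizes, noise level, and the constant in the $m = O(d \log d)$ sample budget are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic over-determined regression instance (sizes are illustrative).
n, d = 2000, 5
X = rng.standard_normal((n, d))
beta_true = rng.standard_normal(d)
y = X @ beta_true + 0.5 * rng.standard_normal(n)

# Leverage scores: l_i = ||U_i||^2 where X = U S V^T is the thin SVD.
# They sum to d, so probs is a valid sampling distribution.
U, _, _ = np.linalg.svd(X, full_matrices=False)
lev = np.sum(U**2, axis=1)
probs = lev / lev.sum()

# Observe m = O(d log d) labels, sampled proportionally to leverage,
# and reweight each sampled row by 1/sqrt(m * p_i) to keep the
# subsampled objective an unbiased estimate of the full one.
m = int(4 * d * np.log(d + 1)) + d  # constant 4 is an arbitrary choice
idx = rng.choice(n, size=m, p=probs)
w = 1.0 / np.sqrt(m * probs[idx])
beta_hat = np.linalg.lstsq(w[:, None] * X[idx], w * y[idx], rcond=None)[0]

# Compare the subsampled solution's cost to the optimal cost on all n points.
beta_star = np.linalg.lstsq(X, y, rcond=None)[0]
cost = lambda b: np.sum((X @ b - y) ** 2)
print(cost(beta_hat) / cost(beta_star))  # typically close to 1
```

The ratio printed at the end is always at least 1 (since $β^*$ is optimal on the full data), and leverage sampling keeps it bounded by a constant with high probability once $m \gtrsim d \log d$; the paper's contribution is achieving the factor-2 guarantee in expectation with only $O(d)$ labels.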
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt · Machine Learning

Died the same way · 👻 Ghosted