Active Regression via Linear-Sample Sparsification
November 27, 2017 · Declared Dead · Annual Conference on Computational Learning Theory
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Xue Chen, Eric Price
arXiv ID
1711.10051
Category
cs.LG: Machine Learning
Cross-listed
cs.DS
Citations
67
Venue
Annual Conference on Computational Learning Theory
Last Checked
1 month ago
Abstract
We present an approach that improves the sample complexity for a variety of curve fitting problems, including active learning for linear regression, polynomial regression, and continuous sparse Fourier transforms. In the active linear regression problem, one would like to estimate the least squares solution $\beta^*$ minimizing $\|X\beta - y\|_2$ given the entire unlabeled dataset $X \in \mathbb{R}^{n \times d}$ but only observing a small number of labels $y_i$. We show that $O(d)$ labels suffice to find a constant factor approximation $\tilde{\beta}$: \[ \mathbb{E}[\|X\tilde{\beta} - y\|_2^2] \leq 2\, \mathbb{E}[\|X \beta^* - y\|_2^2]. \] This improves on the best previous result of $O(d \log d)$ from leverage score sampling. We also present results for the \emph{inductive} setting, showing when $\tilde{\beta}$ will generalize to fresh samples; these apply to continuous settings such as polynomial regression. Finally, we show how the techniques yield improved results for the non-linear sparse Fourier transform setting.
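For context, the $O(d \log d)$ baseline the abstract improves on is leverage score sampling: sample rows of $X$ with probability proportional to their leverage scores, query only the labels of the sampled rows, and solve the reweighted least squares problem. Below is a minimal numpy sketch of that baseline, not the paper's $O(d)$ sparsification algorithm; the `query_label` callback and the sample budget `m` are illustrative assumptions.

```python
import numpy as np

def leverage_scores(X):
    # The leverage score of row i is the squared norm of row i of U,
    # where X = U S V^T is the thin SVD; the scores sum to rank(X).
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return np.sum(U ** 2, axis=1)

def active_regression(X, query_label, m, rng):
    """Sample m rows with probability proportional to leverage,
    query only those m labels, and solve the reweighted least squares."""
    n, d = X.shape
    p = leverage_scores(X)
    p = p / p.sum()
    idx = rng.choice(n, size=m, replace=True, p=p)
    w = 1.0 / np.sqrt(m * p[idx])  # importance-sampling weights
    Xs = w[:, None] * X[idx]
    ys = w * np.array([query_label(i) for i in idx])
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta
```

The reweighting by $1/\sqrt{m\,p_i}$ makes the subsampled normal equations unbiased, so with $m = O(d \log d)$ samples the estimate achieves a constant-factor approximation to the full least squares cost; the paper's contribution is reducing this budget to $O(d)$.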
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Machine Learning
XGBoost: A Scalable Tree Boosting System
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Semi-Supervised Classification with Graph Convolutional Networks
Proximal Policy Optimization Algorithms
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Died the same way – 👻 Ghosted
Language Models are Few-Shot Learners
You Only Look Once: Unified, Real-Time Object Detection
A Unified Approach to Interpreting Model Predictions