No penalty, no tears: Least squares in high-dimensional linear models
June 07, 2015 · Declared Dead · International Conference on Machine Learning
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Xiangyu Wang, David Dunson, Chenlei Leng
arXiv ID
1506.02222
Category
stat.ME
Cross-listed
cs.LG, math.ST, stat.ML
Citations
16
Venue
International Conference on Machine Learning
Last Checked
1 month ago
Abstract
Ordinary least squares (OLS) is the default method for fitting linear models, but is not applicable for problems with dimensionality larger than the sample size. For these problems, we advocate the use of a generalized version of OLS motivated by ridge regression, and propose two novel three-step algorithms involving least squares fitting and hard thresholding. The algorithms are methodologically simple to understand intuitively, computationally easy to implement efficiently, and theoretically appealing for choosing models consistently. Numerical exercises comparing our methods with penalization-based approaches in simulations and data analyses illustrate the great potential of the proposed algorithms.
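The abstract is terse, so here is a minimal, hypothetical sketch of the kind of pipeline it describes: a ridge-motivated generalization of OLS, followed by hard thresholding and a refit. The function names, the choice of the dual ridge form X^T (X X^T + r I)^{-1} y, and the fixed sparsity level k are assumptions for illustration, not the authors' exact algorithms (the paper proposes two three-step variants).

import numpy as np

def generalized_ols(X, y, r=1e-4):
    # Assumed form: dual (kernel) ridge, X^T (X X^T + r I_n)^{-1} y.
    # As r -> 0 this tends to the minimum-norm least squares fit,
    # which is well defined even when p > n.
    n = X.shape[0]
    return X.T @ np.linalg.solve(X @ X.T + r * np.eye(n), y)

def three_step_select(X, y, k, r=1e-4):
    # Step 1: fit the generalized OLS estimator.
    beta = generalized_ols(X, y, r)
    # Step 2: hard-threshold, keeping the k largest |coefficients|.
    keep = np.sort(np.argsort(np.abs(beta))[-k:])
    # Step 3: refit ordinary least squares on the retained columns.
    beta_refit = np.zeros(X.shape[1])
    beta_refit[keep] = np.linalg.lstsq(X[:, keep], y, rcond=None)[0]
    return keep, beta_refit

A quick synthetic check of the sketch (all data simulated here, not from the paper):

rng = np.random.default_rng(0)
n, p, s = 50, 200, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 3.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)
support, beta_hat = three_step_select(X, y, k=s)
print(support)  # ideally recovers columns 0..4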
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – stat.ME
Performance Metrics (Error Measures) in Machine Learning Regression, Forecasting and Prognostics: Properties and Typology
External Validity: From Do-Calculus to Transportability Across Populations
Least Ambiguous Set-Valued Classifiers with Bounded Error Levels
Doubly Robust Policy Evaluation and Optimization
Comparison of Bayesian predictive methods for model selection
Died the same way – 👻 Ghosted
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System