On the Fine-Grained Complexity of Empirical Risk Minimization: Kernel Methods and Neural Networks
April 10, 2017 · The Ethereal · Neural Information Processing Systems
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Arturs Backurs, Piotr Indyk, Ludwig Schmidt
arXiv ID
1704.02958
Category
cs.CC: Computational Complexity
Cross-listed
cs.DS, cs.LG, stat.ML
Citations
43
Venue
Neural Information Processing Systems
Last Checked
1 month ago
Abstract
Empirical risk minimization (ERM) is ubiquitous in machine learning and underlies most supervised learning methods. While there has been a large body of work on algorithms for various ERM problems, the exact computational complexity of ERM is still not understood. We address this issue for multiple popular ERM problems including kernel SVMs, kernel ridge regression, and training the final layer of a neural network. In particular, we give conditional hardness results for these problems based on complexity-theoretic assumptions such as the Strong Exponential Time Hypothesis. Under these assumptions, we show that there are no algorithms that solve the aforementioned ERM problems to high accuracy in sub-quadratic time. We also give similar hardness results for computing the gradient of the empirical loss, which is the main computational burden in many non-convex learning tasks.
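For context on the quadratic barrier the abstract refers to: the Strong Exponential Time Hypothesis (SETH) asserts that for every eps > 0 there is a clause width k such that k-SAT on n variables cannot be solved in O(2^((1-eps)n)) time, and the paper's hardness results are conditioned on assumptions of this kind. The baseline the results rule out improving is visible in the standard direct approach to one of the problems studied, kernel ridge regression: the exact solver must form the full n x n Gram matrix, so it already spends Theta(n^2) time before any linear algebra. Below is a minimal NumPy sketch of that direct baseline; the Gaussian kernel, bandwidth, and regularization value are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def gaussian_kernel_matrix(X, bandwidth=1.0):
    """Form the full n x n Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 * bandwidth^2)).
    Writing this matrix down already costs Theta(n^2 d) time and Theta(n^2) memory."""
    sq_norms = np.sum(X**2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    # Clamp tiny negative values caused by floating-point cancellation.
    return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * bandwidth**2))

def kernel_ridge_regression(X, y, lam=1e-3, bandwidth=1.0):
    """Exact KRR: solve (K + lam * n * I) alpha = y. The solve is O(n^3), but even
    the Gram-matrix step above is what a sub-quadratic algorithm would have to avoid."""
    n = X.shape[0]
    K = gaussian_kernel_matrix(X, bandwidth)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

# Tiny usage example on synthetic data (sizes are illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = rng.normal(size=500)
alpha = kernel_ridge_regression(X, y)
```

Under the paper's assumptions, no algorithm can solve these ERM problems to high accuracy in time sub-quadratic in n, so the quadratic cost of the sketch above is, up to the conditional lower bound, essentially unavoidable for exact solutions.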
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt: Computational Complexity
An Exponential Separation Between Randomized and Deterministic Complexity in the LOCAL Model
The Parallelism Tradeoff: Limitations of Log-Precision Transformers
The Hardness of Approximation of Euclidean k-means
Slightly Superexponential Parameterized Problems