On the Fine-Grained Complexity of Empirical Risk Minimization: Kernel Methods and Neural Networks

April 10, 2017 · The Ethereal · 🏛 Neural Information Processing Systems

🔮 THE ETHEREAL
Pure theory - exists on a plane beyond code

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Arturs Backurs, Piotr Indyk, Ludwig Schmidt
arXiv ID: 1704.02958
Category: cs.CC (Computational Complexity)
Cross-listed: cs.DS, cs.LG, stat.ML
Citations: 43
Venue: Neural Information Processing Systems
Last checked: 1 month ago
Abstract
Empirical risk minimization (ERM) is ubiquitous in machine learning and underlies most supervised learning methods. While there has been a large body of work on algorithms for various ERM problems, the exact computational complexity of ERM is still not understood. We address this issue for multiple popular ERM problems including kernel SVMs, kernel ridge regression, and training the final layer of a neural network. In particular, we give conditional hardness results for these problems based on complexity-theoretic assumptions such as the Strong Exponential Time Hypothesis. Under these assumptions, we show that there are no algorithms that solve the aforementioned ERM problems to high accuracy in sub-quadratic time. We also give similar hardness results for computing the gradient of the empirical loss, which is the main computational burden in many non-convex learning tasks.
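
The quadratic barrier described in the abstract is easiest to see against the naive baseline that the hardness results say cannot be meaningfully improved: an exact Gaussian-kernel ridge regression solve that materializes all n² kernel entries. Below is a minimal NumPy sketch of that baseline (the function names and the lam/sigma defaults are illustrative choices, not taken from the paper); under SETH, the paper shows that no algorithm solves this problem to high accuracy in sub-quadratic time, so the Θ(n²) kernel-matrix step is essentially unavoidable for exact answers.

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma=1.0):
    # All n^2 pairwise kernel evaluations: exactly the quadratic
    # cost targeted by the paper's conditional lower bounds.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_ridge_regression(X, y, lam=1e-3, sigma=1.0):
    # Exact KRR: solve (K + lam * I) alpha = y.
    # lam and sigma are arbitrary illustrative defaults, not from the paper.
    n = X.shape[0]
    K = gaussian_kernel_matrix(X, sigma)            # Theta(n^2 * d) time
    return np.linalg.solve(K + lam * np.eye(n), y)  # plus O(n^3) for the solve

# Example: 500 points in 10 dimensions.
X = np.random.randn(500, 10)
y = np.random.randn(500)
alpha = kernel_ridge_regression(X, y)
```

The cubic-time linear solve can be sped up with iterative methods, but the hardness results already bite at the n² kernel evaluations: the paper shows that even computing the gradient of the empirical loss to high accuracy is conditionally quadratic-time hard.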
Community shame: Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Computational Complexity