Dimensionality Reduction of Massive Sparse Datasets Using Coresets

March 05, 2015 · Declared Dead · 🏛 Neural Information Processing Systems

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Dan Feldman, Mikhail Volkov, Daniela Rus
arXiv ID: 1503.01663
Category: cs.DS: Data Structures & Algorithms
Citations: 55
Venue: Neural Information Processing Systems
Last Checked: 2 months ago
Abstract
In this paper we present a practical solution with performance guarantees to the problem of dimensionality reduction for very large scale sparse matrices. We show applications of our approach to computing the low rank approximation (reduced SVD) of such matrices. Our solution uses a coreset: a subset of $O(k/\varepsilon^2)$ scaled rows from the $n \times d$ input matrix that approximates the sum of squared distances from its rows to every $k$-dimensional subspace in $\mathbb{R}^d$, up to a factor of $1 \pm \varepsilon$. An open theoretical problem has been whether we can compute such a coreset whose size is independent of the input matrix and that is also a weighted subset of its rows. An open practical problem has been whether we can compute a non-trivial approximation to the reduced SVD of very large databases, such as the Wikipedia document-term matrix, in a reasonable time. We answer these questions affirmatively and demonstrate an algorithm that efficiently computes a low rank approximation of the entire English Wikipedia. Our main technical result is a novel technique for deterministic coreset construction that is based on a reduction to the problem of $\ell_2$ approximation for item frequencies.
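To make the abstract's guarantee concrete, here is a minimal, illustrative sketch in Python/NumPy of the coreset idea: sample and rescale a small weighted subset of rows so that the small matrix roughly preserves sums of squared distances to k-dimensional subspaces, and hence its reduced SVD approximates that of the full matrix. The function name row_coreset and the use of randomized leverage-score sampling are stand-ins chosen for exposition; the paper's actual contribution is a deterministic construction via a reduction to $\ell_2$ item-frequency approximation, which this sketch does not implement.

```python
import numpy as np

def row_coreset(A, k, eps, seed=None):
    """Return a small matrix C of O(k / eps^2) rescaled rows of A whose
    sums of squared distances to any k-dimensional subspace roughly match
    those of A. Illustrative leverage-score sampling only; the paper's
    construction is deterministic, not randomized."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    # Rank-k leverage scores: squared row norms of the top-k left
    # singular vectors. (A full SVD here is for clarity only; it would
    # defeat the purpose at Wikipedia scale.)
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    lev = np.sum(U[:, :k] ** 2, axis=1)
    p = lev / lev.sum()                       # sampling distribution
    m = int(np.ceil(k / eps ** 2))            # coreset size O(k / eps^2)
    idx = rng.choice(n, size=m, replace=True, p=p)
    scale = 1.0 / np.sqrt(m * p[idx])         # rescale so E[C.T @ C] = A.T @ A
    return A[idx] * scale[:, None]            # weighted subset of rows

# Usage: the coreset's top-k right singular vectors span a subspace whose
# projection error on A is close to the optimal rank-k error.
rng = np.random.default_rng(0)
A = rng.standard_normal((5000, 50))
C = row_coreset(A, k=5, eps=0.5, seed=1)
_, _, Vt_full = np.linalg.svd(A, full_matrices=False)
_, _, Vt_core = np.linalg.svd(C, full_matrices=False)
P_full, P_core = Vt_full[:5], Vt_core[:5]
err_full = np.linalg.norm(A - A @ P_full.T @ P_full, "fro")
err_core = np.linalg.norm(A - A @ P_core.T @ P_core, "fro")
print(err_core / err_full)  # near 1.0 when the coreset preserves the geometry
```

For the sparse matrices the paper targets, the same scheme would operate on scipy.sparse rows; the key point of the $O(k/\varepsilon^2)$ bound is that the coreset size, and hence the memory footprint, is independent of $n$.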
Community shame: Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Data Structures & Algorithms

Died the same way – 👻 Ghosted