A High Performance Implementation of Spectral Clustering on CPU-GPU Platforms

February 13, 2018 · Entered Twilight · 🏛 IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 7.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: Benchmark, Dataset, LICENSE, Makefile, Makefile_example.inc, README.md, centroids.h, kmeans.h, labels.cu, labels.h, paper.pdf, spectral_clustering.cu, timer.cu, timer.h

Authors: Yu Jin, Joseph F. JaJa
arXiv ID: 1802.04450
Category: cs.DC (Distributed Computing)
Cross-listed: cs.MS
Citations: 22
Venue: IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum
Repository: https://github.com/yuj-umd/fastsc ⭐ 32
Last checked: 1 month ago
Abstract
Spectral clustering is one of the most popular graph clustering algorithms, which achieves the best performance for many scientific and engineering applications. However, existing implementations in commonly used software platforms such as Matlab and Python do not scale well for many of the emerging Big Data applications. In this paper, we present a fast implementation of the spectral clustering algorithm on a CPU-GPU heterogeneous platform. Our implementation takes advantage of the computational power of the multi-core CPU and the massive multithreading and SIMD capabilities of GPUs. Given the input as data points in high dimensional space, we propose a parallel scheme to build a sparse similarity graph represented in a standard sparse representation format. Then we compute the smallest $k$ eigenvectors of the Laplacian matrix by utilizing the reverse communication interfaces of ARPACK software and cuSPARSE library, where $k$ is typically very large. Moreover, we implement a very fast parallelized $k$-means algorithm on GPUs. Our implementation is shown to be significantly faster compared to the best known Matlab and Python implementations for each step. In addition, our algorithm scales to problems with a very large number of clusters.
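The pipeline the abstract describes (sparse similarity graph in a standard sparse format, smallest $k$ eigenvectors of the Laplacian via ARPACK, then $k$-means on the embedding) can be sketched on the CPU side. Below is a minimal Python sketch using scipy's `eigsh`, which wraps the same ARPACK library the authors drive through its reverse-communication interface; it stands in for the paper's CUDA kernels, and the function and parameter names are illustrative, not taken from the repository.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import KMeans

def spectral_clustering(X, n_clusters, n_neighbors=10):
    """Illustrative three-stage spectral clustering pipeline (CPU sketch)."""
    # Stage 1: sparse k-nearest-neighbor similarity graph, stored in CSR.
    A = kneighbors_graph(X, n_neighbors, mode="connectivity", include_self=False)
    A = 0.5 * (A + A.T)  # symmetrize so the graph is undirected

    # Stage 2: eigenvectors of the normalized Laplacian
    # L = I - D^{-1/2} A D^{-1/2}. The k smallest eigenvectors of L are
    # exactly the k largest eigenvectors of D^{-1/2} A D^{-1/2}, which
    # ARPACK (wrapped here by eigsh) finds in regular mode.
    d = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = diags(1.0 / np.sqrt(d))
    M = d_inv_sqrt @ A @ d_inv_sqrt
    _, U = eigsh(M, k=n_clusters, which="LA")

    # Stage 3: row-normalize the spectral embedding, then run k-means.
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    return KMeans(n_clusters=n_clusters, n_init=10,
                  random_state=0).fit_predict(U)
```

In the paper's implementation, Stage 2's matrix-vector products inside the ARPACK iteration are offloaded to cuSPARSE on the GPU, and Stage 3 is a custom parallelized $k$-means kernel; the sketch above only mirrors the algorithmic structure.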
Community shame:
Not yet rated
