Engineering fast multilevel support vector machines

July 24, 2017 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 6.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, .gitmodules, Bibliography.txt, LICENSE, README.md, docs, flann, install_flann.sh, install_mlsvm.sh, petsc, petsc_configure.sh, pyflann, src

Authors: E. Sadrfaridpour, T. Razzaghi, I. Safro
arXiv ID: 1707.07657
Category: cs.LG (Machine Learning)
Cross-listed: cs.DS, stat.CO, stat.ML
Citations: 4
Venue: arXiv.org
Repository: https://github.com/esadr/mlsvm
⭐ 24
Last checked: 2 months ago
Abstract
The computational complexity of solving a nonlinear support vector machine (SVM) is prohibitive on large-scale data. This issue becomes especially acute when the data presents additional difficulties such as highly imbalanced class sizes. Nonlinear kernels typically produce significantly higher classification quality than linear kernels, but they introduce extra kernel and model parameters that require computationally expensive fitting; this improves quality but dramatically reduces performance. We introduce a generalized fast multilevel framework for regular and weighted SVM and discuss several versions of its algorithmic components that lead to a good trade-off between quality and time. Our framework is implemented using PETSc, which allows easy integration with scientific computing tasks. The experimental results demonstrate significant speedup compared to state-of-the-art nonlinear SVM libraries. Reproducibility: our source code, documentation, and parameters are available at https://github.com/esadr/mlsvm.
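The core idea sketched in the abstract is a multilevel (multigrid-style) training loop: repeatedly coarsen the training set, solve the SVM cheaply at the coarsest level, then refine back up by retraining only near the current decision boundary. Below is a minimal, hedged sketch of that idea in Python using scikit-learn's `SVC` as the base solver. It is not the authors' PETSc implementation: the `coarsen` step here is plain random sampling (the paper uses algebraic-multigrid-style aggregation), and the margin threshold `1.5` is an arbitrary illustrative choice.

```python
# Hypothetical sketch of a multilevel SVM (coarsen -> train -> refine).
# NOT the mlsvm implementation; sampling and thresholds are illustrative.
import numpy as np
from sklearn.svm import SVC

def coarsen(X, y, rng):
    """Halve the training set by random sampling.
    (A stand-in for the paper's multigrid-style aggregation.)"""
    idx = rng.choice(len(X), size=len(X) // 2, replace=False)
    return X[idx], y[idx]

def multilevel_svm(X, y, coarsest=200, seed=0):
    rng = np.random.default_rng(seed)
    # Build a hierarchy of progressively coarser training sets.
    levels = [(X, y)]
    while len(levels[-1][0]) > coarsest:
        levels.append(coarsen(*levels[-1], rng))
    # Solve the cheap problem at the coarsest level first.
    model = SVC(kernel="rbf").fit(*levels[-1])
    # Refine: at each finer level, retrain only on points that lie
    # close to the current decision boundary.
    for Xf, yf in reversed(levels[:-1]):
        margin = np.abs(model.decision_function(Xf))
        near = margin < 1.5  # illustrative cutoff
        if near.sum() >= 2 and len(np.unique(yf[near])) == 2:
            model = SVC(kernel="rbf").fit(Xf[near], yf[near])
    return model
```

The design point this illustrates is the trade-off the abstract mentions: the expensive kernel solve never sees the full fine-level data at once, only the coarsest set plus the boundary neighborhoods during refinement.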
