Optimal parameters for bloom-filtered joins in Spark

June 08, 2017 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 8.0 years ago (β‰₯5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, Download_data.py, README.md, abnormal_join_stage.png, analysis.ipynb, applications, ipython_progressbar_iterator.py, model_bloom.png, model_join.png, screenshot-jobs-err20.png, screenshot-join-job.png, spark-events, time_by_job_vs_error.png, total_join_time.png, total_time.png

Authors: Ophir Lojkine
arXiv ID: 1706.02785
Category: cs.DC (Distributed Computing)
Cross-listed: cs.DB
Citations: 1
Venue: arXiv.org
Repository: https://github.com/lovasoa/spark-bloomfiltered-join-analysis/blob/master/analysis.ipynb
Last Checked: 2 months ago
Abstract
In this paper, we present an algorithm that efficiently joins relational database tables in a distributed environment using Bloom filters of optimal size. Rather than using fixed-size Bloom filters as in previous research, we find the optimal filter size by building a mathematical model of the join algorithm and then deriving the optimal parameters through standard mathematical optimization. With these optimal parameters, the algorithm beats both previous Bloom-filter approaches and the default SparkSQL engine, not only on star joins but also on traditional database schemas. The experiments were conducted on a standard TPC-H database stored as Parquet files on a distributed file system.
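The paper derives its filter sizes from its own cost model of the distributed join; as an illustrative sketch only, the snippet below uses the classic memory-optimal Bloom filter formulas (m = -n ln p / (ln 2)², k = (m/n) ln 2) to size a filter built on the small side of a join, which then pre-filters the large side before an exact hash join. All names (`BloomFilter`, `bloom_filtered_join`) are hypothetical and not from the paper's code.

```python
import hashlib
import math

class BloomFilter:
    """Bloom filter sized for n expected items and false-positive rate p,
    using the classic optimal-size formulas (not the paper's cost model)."""
    def __init__(self, n, p):
        # m = -n ln p / (ln 2)^2 bits, k = (m/n) ln 2 hash functions.
        self.m = max(1, math.ceil(-n * math.log(p) / (math.log(2) ** 2)))
        self.k = max(1, round(self.m / n * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _indexes(self, item):
        # Double hashing: derive k bit positions from two 64-bit digests.
        h = hashlib.sha256(str(item).encode()).digest()
        h1 = int.from_bytes(h[:8], "big")
        h2 = int.from_bytes(h[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def might_contain(self, item):
        # False positives possible; false negatives are not.
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indexes(item))

def bloom_filtered_join(small, large, key_small, key_large, p=0.01):
    """Join two lists of dicts on the given keys. A Bloom filter built from
    the small side's keys discards most non-matching rows of the large side
    before the exact hash join, mimicking the shuffle savings in a
    distributed setting."""
    bf = BloomFilter(len(small), p)
    for row in small:
        bf.add(row[key_small])
    # Only candidate rows reach the exact join; Bloom false positives
    # are eliminated here, so the result is still exact.
    candidates = [row for row in large if bf.might_contain(row[key_large])]
    index = {}
    for row in small:
        index.setdefault(row[key_small], []).append(row)
    return [{**l, **s} for l in candidates for s in index.get(l[key_large], [])]
```

In a real Spark job the filter would be built on the executors, merged on the driver, and broadcast before scanning the large table; the trade-off the paper optimizes is filter size (broadcast cost) against false-positive rate (wasted shuffle of non-matching rows).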
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Distributed Computing