Evaluating Load Balancing Performance in Distributed Storage with Redundancy
October 13, 2019 · Declared Dead · International Symposium "Problems of Redundancy in Information and Control Systems"
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Mehmet Fatih Aktas, Amir Behrouzi-Far, Emina Soljanin, Philip Whiting
arXiv ID
1910.05791
Category
cs.PF: Performance
Cross-listed
cs.IT
Citations
8
Venue
International Symposium "Problems of Redundancy in Information and Control Systems"
Last Checked
1 month ago
Abstract
To facilitate load balancing, distributed systems store data redundantly. We evaluate the load balancing performance of storage schemes in which each object is stored at $d$ different nodes, and each node stores the same number of objects. In our model, the load offered for the objects is sampled uniformly at random from all the load vectors with a fixed cumulative value. We find that the load balance in a system of $n$ nodes improves multiplicatively with $d$ as long as $d = o\left(\log(n)\right)$, and improves exponentially once $d = \Theta\left(\log(n)\right)$. We show that the load balance improves in the same way with $d$ when the service choices are created with XOR's of $r$ objects rather than object replicas. In such redundancy schemes, storage overhead is reduced multiplicatively by $r$. However, recovery of an object requires downloading content from $r$ nodes. At the same time, the load balance increases additively by $r$. We express the system's load balance in terms of the maximal spacing, i.e., the maximum of $d$ consecutive spacings between the order statistics of uniform random variables. Using this connection and the limit results on maximal $d$-spacings, we derive our main results.
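The abstract reduces load balance to the maximal $d$-spacing of uniform order statistics: drop $n-1$ uniform points on $[0,1]$, which cut the interval into $n$ spacings, and take the largest total length of $d$ consecutive spacings. As a hedged illustration of that quantity (not code from the paper; the function name and parameters are my own), a minimal Python sketch:

```python
import random

def maximal_d_spacing(n, d, seed=0):
    """Largest total length of d consecutive spacings cut out of [0, 1]
    by n - 1 i.i.d. Uniform(0, 1) points (an illustrative sketch)."""
    rng = random.Random(seed)
    points = sorted(rng.random() for _ in range(n - 1))
    cuts = [0.0] + points + [1.0]
    # n spacings between consecutive order statistics (and the endpoints).
    spacings = [b - a for a, b in zip(cuts, cuts[1:])]
    # Slide a window of d consecutive spacings and take the largest sum.
    return max(sum(spacings[i:i + d]) for i in range(n - d + 1))
```

For a fixed point set, the maximal $d$-spacing grows with $d$ (a $d$-window contains the largest single spacing), and with $d = n$ it covers the whole interval, so the value is always between the largest spacing and 1.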
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Performance
R.I.P.
👻
Ghosted
A General Formula for the Stationary Distribution of the Age of Information and Its Application to Single-Server Queues
R.I.P.
👻
Ghosted
AI Benchmark: All About Deep Learning on Smartphones in 2019
R.I.P.
👻
Ghosted
BestConfig: Tapping the Performance Potential of Systems via Automatic Configuration Tuning
R.I.P.
👻
Ghosted
Online normalizer calculation for softmax
R.I.P.
👻
Ghosted
CLTune: A Generic Auto-Tuner for OpenCL Kernels
Died the same way – 👻 Ghosted
R.I.P.
👻
Ghosted
Language Models are Few-Shot Learners
R.I.P.
👻
Ghosted
PyTorch: An Imperative Style, High-Performance Deep Learning Library
R.I.P.
👻
Ghosted
XGBoost: A Scalable Tree Boosting System
R.I.P.
👻
Ghosted