Fast and Robust Distributed Learning in High Dimension

May 05, 2019 · Entered Twilight · arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 6.0 years ago (β‰₯5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, README.md, acm_badges.png, aggregators, cluster.py, config.py, deploy.py, experiments.sh, experiments, external, graph.py, native, runner.py, tf_patches, tools

Authors: El-Mahdi El-Mhamdi, Rachid Guerraoui, Sébastien Rouault
arXiv ID: 1905.04374
Category: cs.DC (Distributed Computing)
Cross-listed: cs.CR, cs.LG, stat.ML
Citations: 16
Venue: arXiv.org
Repository: https://github.com/LPD-EPFL/AggregaThor (⭐ 40)
Last Checked: 1 month ago
Abstract
Could a gradient aggregation rule (GAR) for distributed machine learning be both robust and fast? This paper answers in the affirmative with multi-Bulyan. Given $n$ workers, $f$ of which are arbitrarily malicious (Byzantine) and $m = n - f$ of which are not, we prove that multi-Bulyan ensures a strong form of Byzantine resilience while incurring only an $\frac{m}{n}$ slowdown compared to averaging, the fastest (but non-Byzantine-resilient) rule for distributed machine learning. When $m \approx n$ (almost all workers are correct), multi-Bulyan reaches the speed of averaging. We also prove that multi-Bulyan's local computation cost is $O(d)$ (like averaging), an important feature for ML where $d$ commonly reaches $10^9$, whereas robust alternatives have at least quadratic cost in $d$. Our theoretical findings are complemented by an experimental evaluation which, in addition to supporting the linear $O(d)$ complexity argument, shows that multi-Bulyan's parallelisability further adds to its efficiency.
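To make the aggregation idea concrete, here is a minimal NumPy sketch of a Bulyan-style GAR that uses multi-Krum-style scores for its selection phase, in the spirit of multi-Bulyan as summarised above. It is a reading aid under stated assumptions, not the AggregaThor implementation (which is TensorFlow-based): the function names, the `n >= 4f + 3` requirement, and the toy attack in the usage example are illustrative choices, not taken from the repository.

```python
# Minimal NumPy sketch of a Bulyan-style gradient aggregation rule (GAR)
# built on multi-Krum-style selection. Names and parameters are illustrative,
# not AggregaThor's API.
import numpy as np

def krum_scores(grads, f):
    """Score each gradient by the sum of squared distances to its
    n - f - 2 (at least one) closest neighbours; lower = more 'central'."""
    n = len(grads)
    dists = np.array([[np.sum((g - h) ** 2) for h in grads] for g in grads])
    keep = max(1, n - f - 2)
    scores = []
    for i in range(n):
        closest = np.sort(np.delete(dists[i], i))[:keep]
        scores.append(closest.sum())
    return np.array(scores)

def multi_bulyan(grads, f):
    """Aggregate n worker gradients, at most f of which are Byzantine.

    Phase 1 (selection): repeatedly pick the best-scored gradient under a
    multi-Krum-style criterion until n - 2f gradients are selected.
    Phase 2 (coordinate-wise): for each coordinate, average the n - 4f
    selected values closest to the median.
    Assumes n >= 4f + 3 so that both phases are well defined.
    """
    grads = [np.asarray(g, dtype=np.float64) for g in grads]
    n = len(grads)
    assert n >= 4 * f + 3, "Bulyan-style aggregation needs n >= 4f + 3"

    # Phase 1: build a selection set of size n - 2f.
    remaining = list(range(n))
    selected = []
    while len(selected) < n - 2 * f:
        scores = krum_scores([grads[i] for i in remaining], f)
        best = remaining[int(np.argmin(scores))]
        selected.append(best)
        remaining.remove(best)

    # Phase 2: coordinate-wise trimmed average around the median.
    sel = np.stack([grads[i] for i in selected])       # shape (n - 2f, d)
    median = np.median(sel, axis=0)
    beta = n - 4 * f                                    # values kept per coordinate
    order = np.argsort(np.abs(sel - median), axis=0)[:beta]
    closest = np.take_along_axis(sel, order, axis=0)
    return closest.mean(axis=0)

if __name__ == "__main__":
    # Toy usage: 9 honest workers around 1.0, 2 Byzantine workers sending huge values.
    rng = np.random.default_rng(0)
    n, f, d = 11, 2, 10
    honest = [rng.normal(1.0, 0.1, d) for _ in range(n - f)]
    byzantine = [np.full(d, 1e6) for _ in range(f)]
    agg = multi_bulyan(honest + byzantine, f)
    print(agg)  # stays close to the honest mean (~1.0 per coordinate)
```

Note that each step of this sketch touches every one of the $d$ coordinates only a constant number of times per pair of workers, which is consistent with the linear-in-$d$ local cost highlighted in the abstract; the paper's claims about multi-Bulyan's exact guarantees and speed, of course, rest on its own analysis and TensorFlow implementation rather than on this sketch.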