Robustness analytics to data heterogeneity in edge computing

February 12, 2020 · Declared Dead · 🏛️ Computer Communications

💀 CAUSE OF DEATH: 404 Not Found
Code link is broken/dead
Authors: Jia Qian, Lars Kai Hansen, Xenofon Fafoutis, Prayag Tiwari, Hari Mohan Pandey
arXiv ID: 2002.05038
Category: cs.DC (Distributed Computing)
Cross-listed: cs.LG
Citations: 5
Venue: Computer Communications
Repository: https://github.com/jiaqian/robustness_of_FL
Last Checked: 1 month ago
Abstract
Federated Learning is a framework that jointly trains a model with complete knowledge on a remotely placed centralized server, but without the requirement of accessing the data stored in distributed machines. Some work assumes that the data generated from edge devices are independently and identically sampled from a common population distribution. However, such ideal sampling may not be realistic in many contexts. Moreover, models based on intrinsic agency, such as active sampling schemes, may lead to highly biased sampling. An imminent question is therefore: how robust is Federated Learning to biased sampling? In this work (code: https://github.com/jiaqian/robustness_of_FL), we experimentally investigate two such scenarios. First, we study a centralized classifier aggregated from a collection of local classifiers trained on data with categorical heterogeneity. Second, we study a classifier aggregated from a collection of local classifiers trained on data obtained through active sampling at the edge. In both scenarios, we present evidence that Federated Learning is robust to data heterogeneity when local training iterations and communication frequency are appropriately chosen.
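The abstract's first scenario, aggregating a central classifier from local classifiers trained on categorically heterogeneous (non-IID) data, can be sketched with a minimal FedAvg-style loop. This is an illustrative reconstruction, not the paper's actual code (the repository link is dead): the logistic-regression local update, the client count, the label-biased sampling, and all hyperparameters below are assumptions chosen only to show how server-side weighted averaging combines biased local models.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic regression by gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        probs = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
        w -= lr * X.T @ (probs - y) / len(y)     # gradient of log-loss
    return w

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: average local models weighted by data size."""
    sizes = np.asarray(client_sizes, dtype=float)
    return (np.stack(client_weights) * (sizes / sizes.sum())[:, None]).sum(axis=0)

rng = np.random.default_rng(0)
d = 5
true_w = rng.normal(size=d)       # ground-truth linear separator
global_w = np.zeros(d)

for _round in range(20):          # communication rounds
    local_models, sizes = [], []
    for client in range(3):
        X = rng.normal(size=(40, d))
        y = (X @ true_w + rng.normal(scale=0.1, size=40) > 0).astype(float)
        # Categorical heterogeneity: each client keeps mostly one class.
        keep = (y == (client % 2)) | (rng.random(40) < 0.2)
        local_models.append(local_update(global_w, X[keep], y[keep]))
        sizes.append(int(keep.sum()))
    global_w = federated_average(local_models, sizes)
```

Here "local training iterations" corresponds to `epochs` and "communication frequency" to the number of aggregation rounds; the paper's finding is that robustness to this kind of label skew depends on choosing both appropriately.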
Community shame: Not yet rated

📜 Similar Papers

In the same crypt: Distributed Computing

Died the same way: 💀 404 Not Found