Enabling Fast Differentially Private SGD via Just-in-Time Compilation and Vectorization

October 18, 2020 · Entered Twilight · 🏛 Neural Information Processing Systems

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: LICENSE, README.md, backpackdp.py, data.py, jaxdp.py, memory_experiment.py, opacusdp.py, owkindp.py, pytorch.py, pyvacydp.py, requirements.txt, results, runtime_experiment.py, text_xla_dumps, tf1dp.py, tf2dp.py, utils.py, xla_logs.zip

Authors: Pranav Subramani, Nicholas Vadivelu, Gautam Kamath
arXiv ID: 2010.09063
Category: cs.LG (Machine Learning)
Cross-listed: cs.CR, cs.PF
Citations: 88
Venue: Neural Information Processing Systems
Repository: https://github.com/TheSalon/fast-dpsgd (⭐ 59)
Last Checked: 1 month ago
Abstract
A common pain point in differentially private machine learning is the significant runtime overhead incurred when executing Differentially Private Stochastic Gradient Descent (DPSGD), which may be as large as two orders of magnitude. We thoroughly demonstrate that by exploiting powerful language primitives, including vectorization, just-in-time compilation, and static graph optimization, one can dramatically reduce these overheads, in many cases nearly matching the best non-private running times. These gains are realized in two frameworks: JAX and TensorFlow. JAX provides rich support for these primitives as core features of the language through the XLA compiler. We also rebuild core parts of TensorFlow Privacy, integrating features from TensorFlow 2 as well as XLA compilation, granting significant memory and runtime improvements over the current release version. These approaches allow us to achieve up to 50x speedups in comparison to the best alternatives. Our code is available at https://github.com/TheSalon/fast-dpsgd.
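The abstract's core claim is that vectorization plus just-in-time compilation removes the per-example Python loop that makes naive DPSGD slow. A minimal sketch of that idea in JAX is below; it is illustrative only, not the repository's actual `jaxdp.py`, and all names (`loss`, `dp_sgd_step`, the hyperparameter defaults) are assumptions for the example.

```python
# Sketch: per-example gradients for DP-SGD via jax.vmap, compiled with jax.jit.
# Hypothetical example; not taken from the fast-dpsgd repository.
import jax
import jax.numpy as jnp

def loss(params, x, y):
    # Simple linear model with squared error, standing in for any model.
    pred = x @ params
    return (pred - y) ** 2

# vmap vectorizes the per-example gradient across the batch in one fused pass,
# replacing the slow per-example Python loop of a naive implementation.
per_example_grads = jax.vmap(jax.grad(loss), in_axes=(None, 0, 0))

@jax.jit  # XLA compiles the whole private update step
def dp_sgd_step(params, x, y, key, l2_clip=1.0, noise_mult=1.1, lr=0.1):
    grads = per_example_grads(params, x, y)               # (batch, dim)
    norms = jnp.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads / jnp.maximum(1.0, norms / l2_clip)   # clip each example
    noise = noise_mult * l2_clip * jax.random.normal(key, params.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / x.shape[0]
    return params - lr * noisy_mean

key = jax.random.PRNGKey(0)
params = jnp.zeros(3)
x = jax.random.normal(key, (8, 3))
y = jnp.ones(8)
new_params = dp_sgd_step(params, x, y, key)
```

Because the clip-and-noise step is fused into the compiled update, the private step runs close to the speed of an ordinary SGD step, which is the effect the paper measures across frameworks.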
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Machine Learning