Rapid Convergence of the Unadjusted Langevin Algorithm: Isoperimetry Suffices

March 20, 2019 · Declared Dead · 🏛️ Neural Information Processing Systems

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Santosh S. Vempala, Andre Wibisono
arXiv ID: 1903.08568
Category: cs.DS (Data Structures & Algorithms)
Cross-listed: cs.LG, math.PR, stat.ML
Citations: 323
Venue: Neural Information Processing Systems
Last checked: 1 month ago
Abstract
We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability distribution $\nu = e^{-f}$ on $\mathbb{R}^n$. We prove a convergence guarantee in Kullback-Leibler (KL) divergence assuming $\nu$ satisfies a log-Sobolev inequality and the Hessian of $f$ is bounded. Notably, we do not assume convexity or bounds on higher derivatives. We also prove convergence guarantees in Rényi divergence of order $q > 1$ assuming the limit of ULA satisfies either the log-Sobolev or Poincaré inequality. Finally, we prove a bound on the bias of the limiting distribution of ULA assuming third-order smoothness of $f$, without requiring isoperimetry.
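The ULA studied in the abstract is the Euler-Maruyama discretization of the Langevin diffusion: each step follows the gradient of $-f$ and adds Gaussian noise. A minimal sketch of the iteration, using a standard Gaussian as a hypothetical example target (so `grad_f(x) = x`; the target, step size, and iteration count here are illustrative choices, not values from the paper):

```python
import numpy as np

def grad_f(x):
    # Hypothetical example target: f(x) = ||x||^2 / 2, so that
    # nu = e^{-f} is (proportional to) a standard Gaussian and grad f(x) = x.
    return x

def ula(grad_f, x0, step=0.01, n_iters=5000, rng=None):
    """Unadjusted Langevin Algorithm:
        x_{k+1} = x_k - step * grad_f(x_k) + sqrt(2 * step) * xi_k,
    with xi_k ~ N(0, I). Returns the array of iterates."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    samples = np.empty((n_iters, x.size))
    for k in range(n_iters):
        x = x - step * grad_f(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

samples = ula(grad_f, x0=np.zeros(2), step=0.05, n_iters=20000, rng=0)
```

Because the discretization is never Metropolis-corrected ("unadjusted"), the iterates converge to a biased limit $\nu_h \neq \nu$ that approaches $\nu$ as the step size shrinks; the abstract's last result bounds exactly this bias.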
