Fast Convergence of Regularized Learning in Games

July 02, 2015 · Declared Dead · 🏛 Neural Information Processing Systems

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, Robert E. Schapire
arXiv ID: 1507.00407
Category: cs.GT (Game Theory)
Cross-listed: cs.AI, cs.LG
Citations: 296
Venue: Neural Information Processing Systems
Last Checked: 1 month ago
Abstract
We show that natural classes of regularized learning algorithms with a form of recency bias achieve faster convergence rates to approximate efficiency and to coarse correlated equilibria in multiplayer normal form games. When each player in a game uses an algorithm from our class, their individual regret decays at $O(T^{-3/4})$, while the sum of utilities converges to an approximate optimum at $O(T^{-1})$, an improvement upon the worst case $O(T^{-1/2})$ rates. We show a black-box reduction for any algorithm in the class to achieve $\tilde{O}(T^{-1/2})$ rates against an adversary, while maintaining the faster rates against algorithms in the class. Our results extend those of [Rakhlin and Sridharan 2013] and [Daskalakis et al. 2014], who only analyzed two-player zero-sum games for specific algorithms.
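Since no code was ever released, here is a minimal sketch of one well-known member of the class the abstract describes: Optimistic Hedge, i.e. multiplicative weights where the most recent loss vector is counted an extra time (the "recency bias"). Everything below is our own illustration, not the authors' implementation: the matching-pennies game, the step size, and all names are assumptions.

```python
import numpy as np

def optimistic_hedge_weights(cum_loss, last_loss, eta):
    # Hedge on the cumulative losses plus one extra copy of the most
    # recent loss vector -- the "recency bias" of the algorithm class.
    logits = -eta * (cum_loss + last_loss)
    logits -= logits.max()  # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def self_play_matching_pennies(T=1000, eta=0.1):
    """Two Optimistic Hedge players in matching pennies (a toy zero-sum
    game chosen for illustration). Returns player 1's total external
    regret and their final mixed strategy."""
    # Phantom first "last loss" for player 1 breaks the symmetric start.
    cum1, last1 = np.zeros(2), np.array([1.0, 0.0])
    cum2, last2 = np.zeros(2), np.zeros(2)
    played = 0.0
    for _ in range(T):
        p = optimistic_hedge_weights(cum1, last1, eta)  # matcher
        q = optimistic_hedge_weights(cum2, last2, eta)  # mismatcher
        l1 = np.array([q[1], q[0]])  # player 1 loses on a mismatch
        l2 = np.array([p[0], p[1]])  # player 2 loses on a match
        played += p @ l1
        cum1 += l1; last1 = l1
        cum2 += l2; last2 = l2
    regret = played - cum1.min()  # vs. best fixed action in hindsight
    return regret, p
```

In self-play both strategies settle near the (1/2, 1/2) equilibrium and player 1's regret stays far below the worst-case Θ(√T) of ordinary no-regret play, in line with the faster self-play rates the abstract claims.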
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Game Theory

R.I.P. 👻 Ghosted

Blockchain Mining Games

Aggelos Kiayias, Elias Koutsoupias, ... (+2 more)

cs.GT 🏛 EC 📚 273 cites · 9 years ago

Died the same way — 👻 Ghosted