Cycles in adversarial regularized learning

September 08, 2017 · Declared Dead · 🏛 ACM-SIAM Symposium on Discrete Algorithms

πŸ‘» CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Panayotis Mertikopoulos, Christos Papadimitriou, Georgios Piliouras
arXiv ID: 1709.02738
Category: cs.GT (Game Theory)
Cross-listed: cs.LG
Citations: 349
Venue: ACM-SIAM Symposium on Discrete Algorithms
Last Checked: 1 month ago
Abstract
Regularized learning is a fundamental technique in online optimization, machine learning and many other fields of computer science. A natural question that arises in these settings is how regularized learning algorithms behave when faced against each other. We study a natural formulation of this problem by coupling regularized learning dynamics in zero-sum games. We show that the system's behavior is Poincaré recurrent, implying that almost every trajectory revisits any (arbitrarily small) neighborhood of its starting point infinitely often. This cycling behavior is robust to the agents' choice of regularization mechanism (each agent could be using a different regularizer), to positive-affine transformations of the agents' utilities, and it also persists in the case of networked competition, i.e., for zero-sum polymatrix games.
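The recurrence result in the abstract concerns continuous-time regularized learning dynamics. As an illustrative sketch (not code from the paper), the snippet below runs multiplicative-weights updates — the discrete-time dynamic induced by entropic regularization — for two players of Matching Pennies, a standard 2x2 zero-sum game. With a small step size the trajectory approximately traces the recurrent orbits around the equilibrium (1/2, 1/2) rather than converging to it; the game, step size, and iteration count are choices made here for illustration.

```python
import numpy as np

# Matching Pennies: player 1's payoff matrix (zero-sum, so player 2 gets -A).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def mwu_step(x, payoff, eta):
    """One multiplicative-weights update (entropic regularization)."""
    w = x * np.exp(eta * payoff)
    return w / w.sum()

eta = 0.01                # small step size to approximate the continuous dynamics
x = np.array([0.7, 0.3])  # player 1's mixed strategy, away from equilibrium
y = np.array([0.4, 0.6])  # player 2's mixed strategy

traj = []
for _ in range(20000):
    traj.append(x[0])
    # Tuple assignment evaluates both right-hand sides with the old x and y,
    # so this is a simultaneous update of the two players.
    x, y = mwu_step(x, A @ y, eta), mwu_step(y, -A.T @ x, eta)

traj = np.array(traj)
# traj oscillates around 1/2 and repeatedly returns near its starting value,
# the cycling behavior the paper establishes for the continuous-time dynamics.
```

Note that in discrete time this recurrence is only approximate (with larger step sizes the orbits of multiplicative weights slowly drift outward), which is why a small `eta` is used here.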
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

πŸ“œ Similar Papers

In the same crypt — Game Theory

R.I.P. πŸ‘» Ghosted

Blockchain Mining Games

Aggelos Kiayias, Elias Koutsoupias, ... (+2 more)

cs.GT πŸ› EC πŸ“š 273 cites 9 years ago

Died the same way — 👻 Ghosted