Multi-scale exploration of convex functions and bandit convex optimization
July 23, 2015 · Declared Dead · Annual Conference Computational Learning Theory
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
SΓ©bastien Bubeck, Ronen Eldan
arXiv ID
1507.06580
Category
math.MG
Cross-listed
cs.LG, math.OC, math.PR, stat.ML
Citations
74
Venue
Annual Conference Computational Learning Theory
Last Checked
1 month ago
Abstract
We construct a new map from a convex function to a distribution on its domain, with the property that this distribution is a multi-scale exploration of the function. We use this map to solve a decade-old open problem in adversarial bandit convex optimization by showing that the minimax regret for this problem is $\tilde{O}(\mathrm{poly}(n) \sqrt{T})$, where $n$ is the dimension and $T$ the number of rounds. This bound is obtained by studying the dual Bayesian maximin regret via the information ratio analysis of Russo and Van Roy, and then using the multi-scale exploration to solve the Bayesian problem.
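The abstract's setting is adversarial bandit convex optimization: in each of $T$ rounds the learner picks a point and observes only the scalar loss there, and the paper proves the minimax regret is $\tilde{O}(\mathrm{poly}(n)\sqrt{T})$. As a minimal sketch of that feedback model only (this is the classic one-point gradient-estimate learner in the style of Flaxman, Kalai, and McMahan, NOT the multi-scale exploration method of Bubeck and Eldan; the loss function, step sizes, and function names below are illustrative choices of ours):

```python
import random

def bandit_convex_1d(T, delta=0.1, eta=0.01, seed=0):
    """Toy bandit convex optimization loop on [-1, 1] against a fixed
    convex loss f(x) = (x - 0.3)**2. The learner never sees f or its
    gradient -- only the scalar loss at its (perturbed) query point.
    Returns average regret per round vs. the best fixed point."""
    rng = random.Random(seed)
    f = lambda x: (x - 0.3) ** 2
    x, total = 0.0, 0.0
    for _ in range(T):
        s = rng.choice((-1.0, 1.0))           # random exploration direction
        loss = f(x + delta * s)               # bandit feedback: one scalar
        total += loss
        g = (loss / delta) * s                # one-point gradient estimate
        x = min(1.0, max(-1.0, x - eta * g))  # projected gradient step
    return (total - T * f(0.3)) / T           # average regret per round

# Average regret shrinks as T grows, consistent with sqrt(T) total regret.
print(bandit_convex_1d(200), bandit_convex_1d(20000))
```

The one-point estimator illustrates why bandit feedback is hard (each round yields a single noisy directional probe); the paper's contribution is a far more refined multi-scale exploration distribution that achieves the $\sqrt{T}$ rate in arbitrary dimension.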
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · math.MG
Metric dimension reduction: A snapshot of the Ribe program · Ghosted
Packings in real projective spaces · Ghosted
Geometric stability via information theory · Ghosted
A spectral gap precludes low-dimensional embeddings · Ghosted
Family Ties: Relating Poncelet 3-Periodics by their Properties · Ghosted
Died the same way · Ghosted
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System