[C77] - UnderGrad: A universal black-box optimization method with almost dimension-free convergence rate guarantees

K. Antonakopoulos, D. Q. Vu, V. Cevher, K. Y. Levy, and P. Mertikopoulos. In ICML '22: Proceedings of the 39th International Conference on Machine Learning, 2022.


Universal methods for optimization are designed to achieve theoretically optimal convergence rates without any prior knowledge of the problem’s regularity parameters and/or the accuracy of the gradient oracle employed by the optimizer. In this regard, existing state-of-the-art algorithms achieve an $\mathcal{O}(1/T^{2})$ value convergence rate in Lipschitz smooth problems with a perfect gradient oracle, and an $\mathcal{O}(1/\sqrt{T})$ convergence rate when the underlying problem is non-smooth or the gradient oracle is stochastic. On the downside, these methods do not take into account the problem’s dimensionality, and this can have a catastrophic impact on the achieved convergence rate, in both theory and practice. Our paper aims to bridge this gap by providing a scalable universal gradient method – dubbed UnderGrad – whose oracle complexity is almost dimension-free in problems with a favorable geometry (like the simplex, linearly constrained semidefinite programs and combinatorial bandits), while retaining the order-optimal dependence on $T$ described above. These “best-of-both-worlds” guarantees are achieved via a primal-dual update scheme inspired by the dual exploration method for solving variational inequalities.

arXiv link: https://arxiv.org/abs/2206.09352

[Oral (long talk) at ICML 2022]
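To give a flavor of the kind of geometry-aware, primal-dual scheme the abstract alludes to, here is a minimal sketch of adaptive (AdaGrad-style) dual averaging with an entropic mirror map on the simplex. This is an illustrative stand-in, not the UnderGrad algorithm itself: the step-size rule, the quadratic objective, and the function names are all assumptions chosen for the example.

```python
import numpy as np

def entropic_projection(y):
    """Mirror step for the simplex: softmax of the accumulated dual state."""
    z = y - y.max()  # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

def adaptive_dual_averaging(grad, dim, T=500):
    """Lazy mirror descent with an adaptive step size (no tuning required).

    The dual state `y` accumulates scaled gradients; the primal iterate is
    recovered by the entropic mirror map. Illustrative only.
    """
    y = np.zeros(dim)   # dual (gradient-accumulation) state
    sq_sum = 0.0        # running sum of squared gradient norms
    for _ in range(T):
        x = entropic_projection(y)
        g = grad(x)
        sq_sum += np.dot(g, g)
        eta = 1.0 / np.sqrt(1.0 + sq_sum)  # adaptive, parameter-free step
        y -= eta * g
    return entropic_projection(y)

# Example: minimize f(x) = ||x - c||^2 over the simplex (minimizer is c itself,
# since c already lies on the simplex).
c = np.array([0.1, 0.7, 0.2])
x_star = adaptive_dual_averaging(lambda x: 2 * (x - c), dim=3)
```

The entropic mirror map is what makes the method "geometry-aware" on the simplex: its updates are multiplicative, which is the mechanism behind the near-dimension-free rates such geometries admit.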
