[C82] - No-regret learning in games with noisy feedback: Faster rates and adaptivity via learning rate separation

Y.-G. Hsieh, K. Antonakopoulos, V. Cevher, and P. Mertikopoulos. In NeurIPS '22: Proceedings of the 36th International Conference on Neural Information Processing Systems, 2022.

Abstract

We examine the problem of regret minimization when the learner is involved in a continuous game with other optimizing agents: in this case, if all players follow a no-regret algorithm, it is possible to achieve significantly lower regret relative to fully adversarial environments. We study this problem in the context of variationally stable games (a class of continuous games which includes all convex-concave and monotone games), and when the players only have access to noisy estimates of their individual payoff gradients. If the noise is additive, the game-theoretic and purely adversarial settings enjoy similar regret guarantees; however, if the noise is multiplicative, we show that the learners can, in fact, achieve constant regret. We achieve this faster rate via an optimistic gradient scheme with learning rate separation – that is, the method’s extrapolation and update steps are tuned to different schedules, depending on the noise profile. Subsequently, to eliminate the need for delicate hyperparameter tuning, we propose a fully adaptive method that smoothly interpolates between worst- and best-case regret guarantees.

arXiv link: https://arxiv.org/abs/2206.06015
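
To make the scheme concrete, below is a minimal sketch of an optimistic gradient loop with learning rate separation, as described in the abstract: the extrapolation step and the update step run on different step-size schedules. This is an illustrative sketch only; `grad_oracle` is a hypothetical noisy-gradient callback, and the specific schedules shown are placeholders, not the tuned or adaptive schedules analyzed in the paper.

```python
import numpy as np

def optimistic_gradient(grad_oracle, x0, T, gamma0=1.0, eta0=1.0):
    """Optimistic gradient method with learning rate separation.

    The extrapolation (leading) step uses gamma_t while the base
    update uses eta_t; running the two on different schedules is the
    'learning rate separation' idea from the abstract. The schedules
    below are illustrative placeholders only.
    """
    x = np.asarray(x0, dtype=float)
    g_prev = np.zeros_like(x)          # last observed noisy gradient
    avg = np.zeros_like(x)             # running time-average of iterates
    for t in range(1, T + 1):
        gamma_t = gamma0 / np.sqrt(t)  # extrapolation step size
        eta_t = eta0 / t               # update step size (decays faster)
        x_lead = x - gamma_t * g_prev  # extrapolate with the stale gradient
        g = grad_oracle(x_lead)        # query noisy gradient at leading state
        x = x - eta_t * g              # base update with the fresh gradient
        g_prev = g
        avg += (x - avg) / t           # incremental mean of iterates
    return avg


# Toy usage: minimize f(x) = ||x||^2 / 2 under multiplicative gradient
# noise, i.e. the oracle returns (1 + noise) * grad f(x). The noise then
# vanishes at the solution x* = 0, which is (loosely) the regime where
# the paper shows constant regret is achievable.
rng = np.random.default_rng(0)
oracle = lambda x: (1.0 + 0.5 * rng.standard_normal()) * x
x_bar = optimistic_gradient(oracle, x0=np.ones(5), T=5000)
print(np.linalg.norm(x_bar))  # should be close to 0
```

The separation reflects the two roles the steps play: the extrapolation step can stay comparatively aggressive to anticipate the other players' moves, while the update step is damped to absorb the noise; under multiplicative noise the perturbation shrinks with the gradient itself, which is, intuitively, what makes the faster rate possible.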