[C94] - Accelerated regularized learning in finite $N$-person games

K. Lotidis, A. Giannou, P. Mertikopoulos, and N. Bambos. In NeurIPS '24: Proceedings of the 38th International Conference on Neural Information Processing Systems, 2024.

Abstract

Motivated by the success of Nesterov’s accelerated gradient algorithm for convex minimization problems, we examine whether it is possible to achieve similar performance gains in the context of online learning in games. To that end, we introduce a family of accelerated learning methods, which we call “follow the accelerated leader” (FTXL), and which incorporates momentum within the general framework of regularized learning, and in particular the exponential / multiplicative weights algorithm and its variants. Drawing inspiration and techniques from the continuous-time analysis of Nesterov’s algorithm, we show that FTXL converges locally to strict Nash equilibria at a quadratic, superlinear rate, achieving in this way an exponential speed-up over vanilla regularized learning methods (which, by comparison, converge to strict equilibria at a geometric, linear rate). Importantly, FTXL maintains its quadratic convergence rate in a broad range of feedback structures, from deterministic, full-information models to stochastic, realization-based ones, and even bandit, payoff-based feedback, where players can only observe their individual realized payoffs.
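The abstract does not spell out the FTXL update rule, so the following is only a minimal, hypothetical sketch of one way momentum could be grafted onto the score (dual) variable of the exponential weights algorithm that the abstract mentions; the function names, the heavy-ball-style momentum buffer, and the parameters `eta` and `gamma` are illustrative assumptions, not the paper's actual FTXL iteration.

```python
import numpy as np

def logit_map(y, eta=0.1):
    """Choice map of exponential weights: mixed strategy from cumulative scores."""
    z = np.exp(eta * y - np.max(eta * y))  # subtract max for numerical stability
    return z / z.sum()

def momentum_exp_weights(payoff, n_actions, n_steps, eta=0.1, gamma=0.9):
    """Hypothetical momentum variant of exponential weights (illustration only).

    `payoff(x)` returns the payoff vector of each pure action given the
    current mixed strategy `x`; vanilla exponential weights would simply
    accumulate these vectors, whereas here they pass through a momentum buffer.
    """
    y = np.zeros(n_actions)      # cumulative scores (dual variable)
    m = np.zeros(n_actions)      # momentum buffer on the scores
    for _ in range(n_steps):
        x = logit_map(y, eta)    # current mixed strategy
        v = payoff(x)            # payoff vector (full-information feedback)
        m = gamma * m + v        # heavy-ball-style accumulation (assumption)
        y = y + m                # momentum-augmented score update
    return logit_map(y, eta)

# Toy usage: one player facing a fixed opponent strategy q in a 2x2 game;
# the play concentrates on the pure best response (a strict equilibrium
# component), the setting where the paper's local convergence results apply.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
q = np.array([0.7, 0.3])
print(momentum_exp_weights(lambda x: A @ q, n_actions=2, n_steps=200))
```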

arXiv link: <>
