[C93] - No-regret learning in harmonic games: Extrapolation in the presence of conflicting interests

D. Legacci, P. Mertikopoulos, C. H. Papadimitriou, G. Piliouras, and B. S. R. Pradelski. In NeurIPS '24: Proceedings of the 38th International Conference on Neural Information Processing Systems, 2024.

Abstract

The long-run behavior of multi-agent online learning – and, in particular, no-regret learning – is relatively well-understood in potential games, where players have common interests. By contrast, in general harmonic games – the strategic complement of potential games, where players have competing interests – very little is known outside the narrow subclass of $2$-player zero-sum games with a fully-mixed equilibrium. Our paper seeks to partially fill this gap by focusing on the full class of (generalized) harmonic games and examining the convergence properties of “follow-the-regularized-leader” (FTRL), the most widely studied class of no-regret learning schemes. As a first result, we show that the continuous-time dynamics of FTRL are Poincaré recurrent, i.e., they return arbitrarily close to their starting point infinitely often, and hence fail to converge. In discrete time, the standard, “vanilla” implementation of FTRL may lead to even worse outcomes, eventually trapping the players in a perpetual cycle of best-responses. However, if FTRL is augmented with a suitable extrapolation step – which includes as special cases the optimistic and mirror-prox variants of FTRL – we show that learning converges to a Nash equilibrium from any initial condition, and all players are guaranteed at most $\mathcal{O}(1)$ regret. These results provide an in-depth understanding of no-regret learning in harmonic games, nesting prior work on $2$-player zero-sum games, and showing at a high level that potential and harmonic games are complementary not only from the strategic but also from the dynamic viewpoint.
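To make the learning scheme discussed in the abstract concrete, below is a minimal sketch (not taken from the paper) of FTRL with an entropic regularizer augmented with an extrapolation step, in the spirit of the mirror-prox variant, run on Matching Pennies, the simplest harmonic (zero-sum) game. The game matrix, step size, and function names are illustrative assumptions, not the paper's exact algorithmic specification.

```python
# Illustrative sketch (not from the paper): entropic FTRL with an
# extrapolation ("mirror-prox"-style) step on Matching Pennies, the
# simplest harmonic game. Step size and payoff matrix are assumptions.
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # payoffs of player 1; player 2 receives -A

def logit(y):
    """Entropic mirror map: cumulative scores -> mixed strategy on the simplex."""
    z = np.exp(y - y.max())
    return z / z.sum()

def payoffs(x1, x2):
    """Payoff vectors (individual payoff gradients) for both players."""
    return A @ x2, -A.T @ x1

def ftrl_extrapolated(steps=5000, eta=0.05):
    y1, y2 = np.zeros(2), np.zeros(2)                  # cumulative score vectors
    for _ in range(steps):
        x1, x2 = logit(y1), logit(y2)                  # base state
        v1, v2 = payoffs(x1, x2)
        x1h, x2h = logit(y1 + eta * v1), logit(y2 + eta * v2)  # extrapolation (leading state)
        w1, w2 = payoffs(x1h, x2h)                     # payoffs queried at the leading state
        y1, y2 = y1 + eta * w1, y2 + eta * w2          # actual score update
    return logit(y1), logit(y2)

if __name__ == "__main__":
    x1, x2 = ftrl_extrapolated()
    print("player 1:", x1, "player 2:", x2)  # both approach [0.5, 0.5], the Nash equilibrium
```

For contrast, replacing the extrapolated payoffs (w1, w2) with the base-state payoffs (v1, v2) recovers "vanilla" FTRL, whose iterates spiral away from the equilibrium in this example instead of converging.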

arXiv link: <>

[Spotlight at NeurIPS 2024]
