[C97] - The impact of uncertainty on regularized learning in games

P.-L. Cauvin, D. Legacci, and P. Mertikopoulos. In ICML '25: Proceedings of the 42nd International Conference on Machine Learning, 2025.

Abstract

In this paper, we investigate how randomness and uncertainty influence learning in games. Specifically, we examine a perturbed variant of the dynamics of "follow-the-regularized-leader" (FTRL), where the players' payoff observations and strategy updates are continually affected by random shocks. Our findings reveal that, in a fairly precise sense, "uncertainty favors extremes": in any game, regardless of the noise level, every player's trajectory of play reaches an arbitrarily small neighborhood of a pure strategy in finite time (which we estimate). Moreover, even if a player does not settle at this strategy, they return arbitrarily close to some (possibly different) pure strategy infinitely often. This prompts the question of which sets of pure strategies emerge as robust predictions of learning in the presence of noise and uncertainty. In this regard, we show that (a) the only possible limits of the FTRL dynamics under uncertainty are pure Nash equilibria; and (b) a span of pure strategies is stable and attracting if and only if it is closed under better replies. Finally, we specialize our analysis to games where the dynamics are recurrent in the deterministic setting, such as zero-sum games with an interior equilibrium. In this case, we show that the stochastic dynamics drift toward the boundary on average, thus disrupting the quasi-periodic behavior observed in the noiseless, deterministic regime.
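As a rough illustration of the boundary-drift phenomenon the abstract describes, the sketch below simulates a noisy variant of exponential weights (FTRL with entropic regularization) in Matching Pennies, a zero-sum game with an interior equilibrium, via an Euler-Maruyama discretization of the perturbed dynamics. This is not the paper's code or experiments: the game, step size `dt`, horizon `T`, and noise level `sigma` are all illustrative assumptions.

```python
import numpy as np

# Matching Pennies: a zero-sum game whose unique Nash equilibrium
# (1/2, 1/2) lies in the interior of the strategy space.
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])  # payoffs of player 1; player 2 receives -A

def softmax(y):
    """Entropic mirror map: turns score (dual) vectors into mixed strategies."""
    z = np.exp(y - y.max())
    return z / z.sum()

def simulate(T=100.0, dt=0.01, sigma=0.5, seed=0):
    """Euler-Maruyama discretization of dY = v(X) dt + sigma dW with
    X = softmax(Y), i.e., exponential weights driven by noisy payoffs.
    Returns the smallest mixed-strategy weight observed at each step."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    y1 = np.zeros(2)  # score variable of player 1
    y2 = np.zeros(2)  # score variable of player 2
    min_weight = np.empty(steps)
    for k in range(steps):
        x1, x2 = softmax(y1), softmax(y2)
        # Drift: the players' mean payoff vectors; diffusion: random shocks.
        y1 += (A @ x2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
        y2 += (-A.T @ x1) * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
        min_weight[k] = min(x1.min(), x2.min())
    return min_weight

if __name__ == "__main__":
    noisy = simulate(sigma=0.5)
    quiet = simulate(sigma=0.0)
    print("smallest strategy weight, sigma=0.5:", noisy.min())
    print("smallest strategy weight, sigma=0.0:", quiet.min())
```

If the paper's prediction holds in this toy setting, the noisy run should come arbitrarily close to a pure strategy (a tiny minimum weight), whereas the noiseless dynamics should stay comparatively far from the boundary as they cycle around the interior equilibrium.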

arXiv link: <>
