Z. Zhou, P. Mertikopoulos, A. L. Moustakas, N. Bambos, and P. W. Glynn. Operations Research, forthcoming.

We consider the target-rate power management problem for wireless networks, and we propose two simple, distributed power management schemes that regulate power in a provably robust manner by efficiently leveraging past information. Both schemes are obtained via a combined approach of learning and “game design” where we (1) design a game with suitable payoff functions such that the optimal joint power profile in the original power management problem is the unique Nash equilibrium of the designed game; and (2) derive distributed power management algorithms by directing the network’s users to employ a no-regret learning algorithm to maximize their individual utility over time. To establish convergence, we focus on the well-known online eager gradient descent learning algorithm in the class of weighted strongly monotone games. In this class of games, we show that when players only have access to imperfect stochastic feedback, multi-agent online eager gradient descent converges to the unique Nash equilibrium in mean square at a $\mathcal{O}(1/T)$ rate. In the context of power management in static networks, we show that the designed games are weighted strongly monotone if the network is feasible (i.e., when all users can concurrently attain their target rates). This allows us to derive a geometric convergence rate to the joint optimal transmission power. More importantly, in stochastic networks where channel quality fluctuates over time, the designed games remain weighted strongly monotone, and the proposed algorithms converge in mean square to the joint optimal transmission power at a $\mathcal{O}(1/T)$ rate, even when the network is only feasible on average (i.e., users may be unable to meet their requirements with positive probability). This stands in stark contrast to existing algorithms (such as the seminal Foschini–Miljanic algorithm and its variants), which may fail to converge altogether.
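The convergence mechanism described above can be illustrated with a minimal sketch: each player runs online gradient descent on its own payoff using noisy gradient feedback, and strong monotonicity of the game drives the joint iterate to the unique Nash equilibrium in mean square. The quadratic payoffs, coupling constant, noise level, and step-size schedule below are all illustrative assumptions, not the paper's designed power-management game.

```python
import numpy as np

# Hedged sketch of multi-agent online gradient descent in a strongly
# monotone game with imperfect stochastic feedback. Player i's payoff
# (an assumption for illustration) is
#   u_i(x) = c_i * x_i - 0.5 * a_i * x_i^2 - b * x_i * sum_{j != i} x_j,
# with a_i > (N - 1) * b, which makes the game strongly monotone and
# gives a unique Nash equilibrium solving the linear system A x = c.

rng = np.random.default_rng(0)
N = 5
a = rng.uniform(2.0, 3.0, size=N)   # own-curvature coefficients
b = 0.2                             # weak cross-player coupling
c = rng.uniform(1.0, 2.0, size=N)   # linear reward terms

# Stacked gradient field: grad_i u_i(x) = c_i - a_i x_i - b * sum_{j != i} x_j.
A = np.diag(a) + b * (np.ones((N, N)) - np.eye(N))
x_star = np.linalg.solve(A, c)      # unique Nash equilibrium

x = np.zeros(N)
T = 20_000
for t in range(1, T + 1):
    grad = c - a * x - b * (x.sum() - x)          # individual payoff gradients
    noisy = grad + rng.normal(0.0, 0.5, size=N)   # imperfect stochastic feedback
    eta = 1.0 / (t + 3)                           # O(1/t) step-size schedule
    x = np.clip(x + eta * noisy, 0.0, 10.0)       # project onto feasible box

# After T steps the squared distance to x_star is O(1/T) in expectation.
print(np.linalg.norm(x - x_star))
```

Each player's update uses only its own noisy gradient, so the scheme is fully distributed; the diagonal-dominance condition on `a` and `b` plays the role of the weighted strong monotonicity assumed in the paper.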