[J22] - Distributed stochastic optimization via matrix exponential learning

P. Mertikopoulos, E. V. Belmega, R. Negrel, and L. Sanguinetti. IEEE Transactions on Signal Processing, vol. 65, no. 9, pp. 2277-2290, May 2017.


In this paper, we investigate a distributed learning scheme for a broad class of stochastic optimization problems and games that arise in signal processing and wireless communications. The proposed algorithm relies on the method of matrix exponential learning (MXL) and only requires locally computable gradient observations that may be imperfect and/or obsolete. To analyze it, we introduce the notion of a stable Nash equilibrium and we show that the algorithm converges globally to such equilibria (or locally, when an equilibrium is only locally stable). We also derive an explicit linear bound on the algorithm's convergence speed, which remains valid under measurement errors and uncertainty of arbitrarily high variance. To validate our theoretical analysis, we test the algorithm in realistic multiple-carrier/multiple-antenna wireless scenarios where several users seek to maximize their energy efficiency. Our results show that learning allows users to attain a net increase between 100% and 500% in energy efficiency, even under very high uncertainty.
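As a rough illustration of the method's structure, the sketch below shows one matrix exponential learning iteration in Python: a Hermitian "score" matrix accumulates (possibly noisy) gradient observations, and the matrix exponential maps it back into the feasible set. The trace normalization used here (mapping onto matrices with trace budget `P`) and the function/variable names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import expm


def mxl_step(Y, V_hat, gamma, P=1.0):
    """One matrix exponential learning (MXL) iteration (illustrative sketch).

    Y      : running score matrix (Hermitian), aggregates gradient steps
    V_hat  : locally computed gradient observation, possibly noisy (Hermitian)
    gamma  : step size
    P      : trace budget of the assumed feasible set {X >= 0, tr(X) <= P}
    """
    Y = Y + gamma * V_hat                 # accumulate gradient information
    E = expm(Y)                           # matrix exponential map
    X = P * E / (1.0 + np.trace(E).real)  # normalize into the trace-bounded set
    return Y, X
```

For instance, iterating this step with `V_hat` set to the gradient of a concave utility drives the play matrix `X` toward a maximizer over the positive-semidefinite, trace-constrained set, while remaining feasible at every iteration.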

arXiv link: https://arxiv.org/abs/1606.01190
