[C39] - On the convergence of stochastic forward-backward-forward algorithms with variance reduction

M. Staudigl, R. I. Bot, P. T. Vuong, and P. Mertikopoulos. In NeurIPS '18: Workshop on Smooth Games, Optimization and Machine Learning (SGO&ML).

Abstract

We develop a new stochastic algorithm with variance reduction for solving pseudo-monotone stochastic variational inequalities. Our method builds on Tseng’s forward-backward-forward algorithm, which is known in the deterministic literature to be a valuable alternative to Korpelevich’s extragradient method when solving variational inequalities over a convex and closed set governed by pseudo-monotone and Lipschitz continuous operators. The main computational advantage of Tseng’s algorithm is that it relies on only a single projection step and two independent queries of a stochastic oracle per iteration. Our algorithm incorporates a variance reduction mechanism and yields almost sure (a.s.) convergence to solutions of a merely pseudo-monotone stochastic variational inequality problem. To the best of our knowledge, this is the first stochastic algorithm achieving this by using only a single projection at each iteration.
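To illustrate the iteration structure described above (one projection and two independent oracle queries per step), here is a minimal Python sketch of a stochastic forward-backward-forward update with mini-batch averaging as the variance reduction mechanism. This is not the authors' exact scheme: the oracle interface, the growing batch-size schedule, and the constant step size are assumptions made for illustration only.

```python
import numpy as np

def stochastic_fbf(oracle, project, x0, step=0.1, batch0=10, n_iters=500, seed=0):
    """Sketch of a stochastic forward-backward-forward iteration.

    oracle(x, rng) -- returns one noisy sample of the operator F at x (assumed interface)
    project(v)     -- Euclidean projection onto the feasible set (the single projection per step)
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iters + 1):
        m = batch0 * k  # growing mini-batch: averaging m samples reduces the oracle variance
        # first oracle query at x, averaged over the mini-batch
        A1 = np.mean([oracle(x, rng) for _ in range(m)], axis=0)
        # forward-backward step: the only projection in the iteration
        y = project(x - step * A1)
        # second, independent oracle query at the intermediate point y
        A2 = np.mean([oracle(y, rng) for _ in range(m)], axis=0)
        # correction ("second forward") step, performed without a projection
        x = y - step * (A2 - A1)
    return x

# Toy usage: a monotone linear operator with additive noise, feasible set = unit ball
F = lambda x, rng: np.array([[0.0, 1.0], [-1.0, 0.0]]) @ x + 0.1 * rng.standard_normal(2)
ball = lambda v: v / max(1.0, np.linalg.norm(v))
print(stochastic_fbf(F, ball, x0=np.array([1.0, 1.0]), n_iters=100))
```

The defining feature is that the second forward step reuses the first oracle estimate and requires no additional projection, which is what distinguishes Tseng's scheme from extragradient-type methods that project twice per iteration.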
