[C60] - On the almost sure convergence of stochastic gradient descent in non-convex problems

P. Mertikopoulos, N. Hallak, A. Kavis, and V. Cevher. In NeurIPS '20: Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020.

Abstract

This paper analyzes the trajectories of stochastic gradient descent (SGD) to help understand the algorithm’s convergence properties in non-convex problems. We first show that the sequence of iterates generated by SGD remains bounded and converges with probability 1 under a very broad range of step-size schedules. Subsequently, going beyond existing positive probability guarantees, we show that SGD avoids strict saddle points/manifolds with probability 1 for the entire spectrum of step-size policies considered. Finally, we prove that the algorithm’s rate of convergence to Hurwicz minimizers is $\mathcal{O}(1/n^p)$ if the method is employed with a $\Theta(1/n^p)$ step-size schedule. This provides an important guideline for tuning the algorithm’s step-size, as it suggests that a cool-down phase with a vanishing step-size could lead to faster convergence; we demonstrate this heuristic using ResNet architectures on CIFAR.
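To make the step-size guideline concrete, the following is a minimal, hypothetical sketch (not code from the paper) of SGD with a constant "warm" phase followed by a vanishing $\Theta(1/n^p)$ cool-down, run on a toy non-convex objective. The objective `toy_loss_grad`, the exponent `p = 0.75`, the base step-size, and the cool-down start are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch only: SGD with a constant warm phase and a
# Theta(1/n^p) cool-down schedule on a toy non-convex objective.
import numpy as np

rng = np.random.default_rng(0)

def toy_loss_grad(x):
    """Gradient of the toy non-convex objective f(x) = sum(x**4 - x**2),
    whose (coordinate-wise) minimizers sit at +-1/sqrt(2)."""
    return 4 * x**3 - 2 * x

def step_size(n, gamma0=0.1, p=0.75, cooldown_start=500):
    """Constant step size during warm-up, then a vanishing 1/n^p decay.
    gamma0, p, and cooldown_start are illustrative choices."""
    if n < cooldown_start:
        return gamma0
    return gamma0 / (n - cooldown_start + 1) ** p

x = rng.standard_normal(10)                       # initial iterate
for n in range(1, 2001):
    noise = 0.1 * rng.standard_normal(x.shape)    # stochastic gradient noise
    g = toy_loss_grad(x) + noise                  # noisy gradient oracle
    x = x - step_size(n) * g                      # SGD update

print("final iterate (coordinates near +-1/sqrt(2) ~ +-0.707):", x)
```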

arXiv link: https://arxiv.org/abs/2006.11144
