[W3] - Online convex optimization and no-regret learning: Algorithms, guarantees and applications

E. V. Belmega, P. Mertikopoulos, R. Negrel, and L. Sanguinetti. Working paper.

Abstract

Spurred by the enthusiasm surrounding the “Big Data” paradigm, the mathematical and algorithmic tools of online optimization have found widespread use in problems where the trade-off between data exploration and exploitation plays a predominant role. This trade-off is of particular importance to several branches and applications of signal processing, such as data mining, statistical inference, multimedia indexing and wireless communications (to name but a few). With this in mind, the aim of this tutorial paper is to provide a gentle introduction to online optimization and learning algorithms that are asymptotically optimal in hindsight, i.e., that approach the performance of a virtual algorithm with unlimited computational power and full knowledge of the future, a property known as no-regret. Particular attention is devoted to identifying the algorithms' theoretical performance guarantees and to establishing links with classic optimization paradigms (both static and stochastic). To allow for a better understanding of this toolbox, we provide several examples throughout the tutorial, ranging from metric learning to wireless resource allocation problems.
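
For concreteness, the no-regret property mentioned above is usually formalized as follows (standard online-convex-optimization notation, illustrative here rather than quoted from the paper): if the learner plays $x_1, \dots, x_T$ from a convex feasible set $\mathcal{X}$ and incurs losses $f_1, \dots, f_T$, the regret is
\[
  \operatorname{Reg}(T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x),
\]
and an algorithm leads to no regret when this gap grows sublinearly, i.e., $\operatorname{Reg}(T) = o(T)$, so its time-averaged performance matches that of the best fixed action chosen in hindsight.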

arXiv link: https://arxiv.org/abs/1804.04529
