N. Liakopoulos, A. Destounis, G. Paschos, A. Spyropoulos, and P. Mertikopoulos. In ICML '19: Proceedings of the 36th International Conference on Machine Learning, 2019.
We study a class of online convex optimization problems with long-term budget constraints that arise naturally as reliability guarantees or total consumption constraints. In this general setting, prior work by Mannor et al. (2009) has shown that achieving no regret is impossible if the functions defining the agent's budget are chosen by an adversary. To overcome this obstacle, we refine the agent's regret metric by introducing the notion of a "K-benchmark", i.e., a comparator which meets the problem's allotted budget over any window of length K. The impossibility analysis of Mannor et al. (2009) is recovered when K = T; however, for K = o(T), we show that it is possible to minimize regret while still meeting the problem's long-term budget constraints. We achieve this via an online learning algorithm based on cautious online Lagrangian descent (COLD), for which we derive explicit bounds on both the incurred regret and the residual budget violations.
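As an illustration of the general primal-dual machinery underlying this line of work, the following is a minimal sketch of online Lagrangian descent for constrained online convex optimization: at each round the learner takes a projected gradient step on the Lagrangian of the observed loss and budget function, then performs dual ascent on the multiplier. This is a generic textbook scheme, not the authors' COLD algorithm; the step sizes `eta`, `mu` and the box domain `[lo, hi]` are illustrative assumptions.

```python
import numpy as np

def online_lagrangian_descent(losses, constraints, x0,
                              eta=0.1, mu=0.1, lo=0.0, hi=1.0):
    """Generic online primal-dual (Lagrangian) descent sketch.

    losses[t](x)      -> (value, gradient) of the round-t loss f_t
    constraints[t](x) -> (value, gradient) of the round-t budget
                         function g_t (satisfied when value <= 0)

    Returns the played iterates and the per-round budget violations.
    """
    x = np.asarray(x0, dtype=float)
    lam = 0.0                      # nonnegative dual multiplier
    plays, violations = [], []
    for f, g in zip(losses, constraints):
        fv, fg = f(x)
        gv, gg = g(x)
        plays.append(x.copy())
        violations.append(gv)
        # primal step on the Lagrangian f + lam * g, projected onto the box
        x = np.clip(x - eta * (fg + lam * gg), lo, hi)
        # dual ascent: grow the multiplier while the budget is exceeded
        lam = max(0.0, lam + mu * gv)
    return np.array(plays), np.array(violations)

# Toy instance: quadratic losses pull the play above a per-round budget of 0.5.
rng = np.random.default_rng(0)
T = 500
targets = rng.uniform(0.6, 1.0, size=T)
losses = [lambda x, a=a: (float(((x - a) ** 2).sum()), 2 * (x - a))
          for a in targets]
constraints = [lambda x: (float(x.sum() - 0.5), np.ones_like(x))
               for _ in range(T)]
plays, violations = online_lagrangian_descent(losses, constraints, x0=[0.0])
```

In this toy run the multiplier rises while the budget is exceeded and pushes the iterates back toward the feasible region, so the long-run average violation shrinks, which is the qualitative behavior (vanishing residual budget violation) that the paper quantifies for its refined K-benchmark.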