Panayotis Mertikopoulos
Content tagged with: optimization
[W8] Explicit second-order min-max optimization methods with optimal convergence guarantees
[J43] The rate of convergence of Bregman proximal methods: Local geometry vs. regularity vs. sharpness
[C91] The computational complexity of finding second-order stationary points
[C90] What is the long-run distribution of stochastic gradient descent? A large deviations analysis
[W7] Setwise coordinate descent for dual asynchronous decentralized optimization
[C87] Riemannian stochastic optimization methods avoid strict saddle points
[J37] Distributed stochastic optimization with large delays
[J36] Multi-agent online optimization with delays: Asynchronicity, adaptivity, and optimism
[C81] Pick your neighbor: Local Gauss-Southwell rule for fast asynchronous decentralized optimization
[C78] AdaGrad avoids saddle points
[C77] UnderGrad: A universal black-box optimization method with almost dimension-free convergence rate guarantees
[C75] The dynamics of Riemannian Robbins-Monro algorithms
[J34] Minibatch forward-backward-forward methods for solving stochastic variational inequalities
[C73] Fast routing under uncertainty: Adaptive learning in congestion games with exponential weights
[C71] Sifting through the noise: Universal first-order methods for stochastic variational inequalities
[C70] Adaptive first-order methods revisited: Convex optimization without Lipschitz requirements
[C69] Equilibrium tracking and convergence in dynamic games
[C68] Optimization in open networks via dual averaging
[C67] Adaptive learning in continuous games: Optimal regret bounds and convergence to Nash equilibrium
[C65] The last-iterate convergence rate of optimistic mirror descent in stochastic variational inequalities
[C64] The limits of min-max optimization algorithms: Convergence to spurious non-critical sets
[C62] Regret minimization in stochastic non-convex learning via a proximal-gradient approach
[C61] Adaptive extra-gradient methods for min-max optimization and games
[J31] On the convergence of mirror descent beyond stochastic convex programming
[C60] On the almost sure convergence of stochastic gradient descent in non-convex problems
[C57] Explore aggressively, update conservatively: Stochastic extragradient methods with variable stepsize scaling
[C56] A new regret analysis for Adam-type algorithms
[C51] Online and stochastic optimization beyond Lipschitz continuity: A Riemannian approach
[D3] Online optimization and learning in games: Theory and applications
[J29] Hessian barrier algorithms for linearly constrained optimization problems
[C50] On the convergence of single-call stochastic extra-gradient methods
[C49] An adaptive mirror-prox algorithm for variational inequalities with singular operators
[C48] Convergent noisy forward-backward-forward algorithms in non-monotone variational inequalities
[C46] Cautious regret minimization: Online optimization with long-term budget constraints
[C44] Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile
[C43] Large-scale network utility maximization: Countering exponential growth with exponentiated gradients
[W3] Online convex optimization and no-regret learning: Algorithms, guarantees and applications
[J26] Stochastic mirror descent dynamics and their convergence in monotone variational inequalities
[J24] On the convergence of gradient-like flows with noisy gradient input
[C39] On the convergence of stochastic forward-backward-forward algorithms with variance reduction
[C37] Distributed asynchronous optimization with unbounded delays: How slow can you go?
[C36] A resource allocation framework for network slicing
[J22] Distributed stochastic optimization via matrix exponential learning
[J21] A continuous-time approach to online optimization
[C32] Stochastic mirror descent in variationally coherent optimization problems
[W2] Boltzmann meets Nash: Energy-efficient routing in optical networks under uncertainty
[J15] A stochastic approximation algorithm for stochastic semidefinite programming
[C24] Distributed learning for resource allocation under uncertainty
[J11] Inertial game dynamics and applications to constrained optimization
[C16] Distributed optimization in multi-user MIMO systems with imperfect and delayed information
[C11] Accelerating population-based search heuristics by adaptive resource allocation
[C8] Matrix exponential learning: Distributed optimization in MIMO systems