I am a tenured researcher (chargé de recherche – CRCN) at the French National Center for Scientific Research (CNRS) working with the POLARIS team at the Laboratoire d’Informatique de Grenoble. My research interests currently lie at the interface of learning, optimization, game theory, and their applications to network science, machine learning, and operations research.
As an undergrad, I majored in physics at the University of Athens. After graduating in 2003, I enrolled in the graduate program of the Mathematics Department of Brown University. While there, I worked on differential geometry with George Daskalopoulos and I got my M.Sc. and M.Phil. in Mathematics in 2005 and 2006 respectively.
My interests subsequently shifted to applied mathematics and theoretical computer science, so I returned to the University of Athens where I started my PhD with Aris Moustakas. During my PhD, I worked on the applications of game theory to wireless networks and I completed my thesis on “Stochastic perturbations in game theory and applications to networks” in 2010. Subsequently, I spent 2010–2011 as a post-doc at the École Polytechnique in Paris, working on game theory and learning with Rida Laraki.
Since 2011, I have been a tenured CNRS researcher at the Laboratoire d’Informatique de Grenoble. Over the years, I have also held a number of visiting positions at the LUISS University of Rome (fall 2016), UC Berkeley (spring 2018), and EPFL (fall 2019).
In 2019, I completed my Habilitation à Diriger des Recherches (HDR) on “Online optimization and learning in games: Theory and applications”. If you are curious, the transcript of my public defense is also available here, along with the referee reports by Jérôme Bolte, Nicolò Cesa-Bianchi, and Sylvain Sorin (to whom I am deeply indebted for their time).
My recent work revolves around game theory, online optimization, and their applications to operations research, machine learning and network theory… and I’m still as liable as ever to drop what I’m doing if presented with a cute little problem!
Figure: Convergence of no-regret learning to strict Nash equilibria vs. avoidance of mixed Nash equilibria.