Benefits of Randomized First Order Algorithms for Min-Max Optimization

Abstract: Modern data science applications require solving high-dimensional optimization problems with a large number of data points. Min-max optimization provides a unified framework for many problems in this context, ranging from empirical risk minimization in machine learning to medical imaging and nonlinear programming. This talk will present two approaches for using randomization to design simple, practical, and adaptive optimization algorithms that improve the best-known complexity guarantees for convex-concave optimization. I will describe first-order primal-dual algorithms with random coordinate updates and discuss their complexity guarantees as well as their practical adaptivity properties. I will then present an extragradient algorithm with stochastic variance reduction that harnesses the finite-sum min-max structure to obtain sharp complexity bounds.
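To illustrate the kind of method the abstract refers to, the sketch below runs the classical (deterministic) extragradient method on a toy convex-concave saddle-point problem. This is not the speaker's randomized, variance-reduced algorithm; it is only a minimal illustration of the extragradient template, with a problem, step size, and iteration count chosen for the example.

```python
import numpy as np

# Toy strongly-convex-strongly-concave saddle-point problem:
#   min_x max_y  f(x, y) = 0.5||x||^2 + x^T A y - 0.5||y||^2,
# whose unique saddle point is (x*, y*) = (0, 0).
rng = np.random.default_rng(0)
n, m = 5, 4
A = rng.standard_normal((n, m))

def grad_x(x, y):
    return x + A @ y        # partial gradient in x

def grad_y(x, y):
    return A.T @ x - y      # partial gradient in y

x = rng.standard_normal(n)
y = rng.standard_normal(m)
tau = 0.05                  # step size, small enough for stability here

for _ in range(2000):
    # extrapolation (look-ahead) step
    xh = x - tau * grad_x(x, y)
    yh = y + tau * grad_y(x, y)
    # update step, using gradients evaluated at the look-ahead point
    x = x - tau * grad_x(xh, yh)
    y = y + tau * grad_y(xh, yh)

print(np.linalg.norm(x), np.linalg.norm(y))  # both shrink toward 0
```

The look-ahead gradient evaluation is what distinguishes extragradient from plain gradient descent-ascent, which can cycle on saddle-point problems; the variance-reduced variant in the talk replaces full gradients with cheaper stochastic estimates over the finite sum.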
Date
Location
Low 3051
Speaker: Ahmet Alacaoglu from the University of Wisconsin