On the intrinsic potential and convergence of nonconvex minimax and bi-level optimization

Machine Learning Seminar Series

by

Yi Zhou
Electrical & Computer Engineering
University of Utah

Many emerging machine learning applications are formulated as either minimax or bi-level optimization problems. Examples include adversarial learning and invariant representation learning, which impose robustness and invariance on the model via a minimax game, and few-shot learning, which trains bi-level models to accomplish different tasks. In this talk, we provide a unified analysis of the popular optimization algorithms used to solve nonconvex minimax and bi-level optimization problems, based on a novel perspective: identifying the intrinsic potential function of these algorithms. In particular, our analysis establishes the convergence of the model parameters under these algorithms and characterizes the impact of local function geometry on their convergence rates. Our study reveals that, under certain conditions, the dynamics of minimax and bi-level optimization algorithms resemble those of gradient descent for nonconvex minimization.
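To make the setup concrete, below is a minimal sketch of gradient descent-ascent (GDA) on a toy nonconvex minimax problem. The talk does not prescribe a particular algorithm or objective, so both are illustrative assumptions here; the sketch simply shows the descent-ascent dynamic that the abstract compares to gradient descent on an induced potential function.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the speaker's specific algorithm):
# gradient descent-ascent (GDA) on the toy nonconvex-strongly-concave problem
#     min_x max_y  f(x, y) = sin(x) * y - y**2 / 2.
# Maximizing over y gives y*(x) = sin(x), so the induced potential
#     Phi(x) = f(x, y*(x)) = sin(x)**2 / 2
# is nonconvex in x, and GDA behaves like gradient descent on Phi.

def grad_f(x, y):
    """Return (df/dx, df/dy) for f(x, y) = sin(x) * y - y**2 / 2."""
    return np.cos(x) * y, np.sin(x) - y

x, y = 1.0, 0.5
eta_x, eta_y = 0.05, 0.5   # two-timescale step sizes (faster ascent on y)

for t in range(2000):
    gx, gy = grad_f(x, y)
    x = x - eta_x * gx     # descent step on the min variable
    y = y + eta_y * gy     # ascent step on the max variable

# x approaches a stationary point of Phi (here x -> 0), while y tracks y*(x).
print(f"x = {x:.4f}, y = {y:.4f}, y*(x) = {np.sin(x):.4f}")
```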


Yi Zhou is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Utah. Before joining the University of Utah in 2019, he received a Ph.D. in Electrical and Computer Engineering from The Ohio State University in 2018 and worked as a postdoctoral fellow at the Information Initiative at Duke University. Dr. Zhou's research interests include nonconvex and distributed optimization, reinforcement learning, deep learning, statistical machine learning, and signal processing.


Start date
Thursday, May 6, 2021, 11 a.m.
End date
Thursday, May 6, 2021, noon
Location

Online via Zoom - http://z.umn.edu/mlseminar