ML Seminar: Constrained Continuous Optimization with First-Order Methods

The UMN Machine Learning Seminar Series brings together faculty, students, and local industrial partners who are interested in the theoretical, computational, and applied aspects of machine learning, to pose problems, exchange ideas, and foster collaborations. Talks are held every Tuesday from 11 a.m. to 12 p.m. during the Spring 2024 semester.

This week's speaker, Haihao (Sean) Lu (University of Chicago), will be giving a talk titled "Constrained Continuous Optimization with First-Order Methods".

Abstract

In this talk, I will discuss the ongoing trend of research on new first-order methods for scaling up and speeding up constrained continuous optimization. Constrained continuous optimization (CCO), including linear programming (LP), quadratic programming (QP), second-order cone programming (SOCP), semi-definite programming (SDP), nonlinear programming (NLP), etc., is a fundamental tool in operations research with wide applications in practice. The state-of-the-art solvers for CCO are mature and reliable at delivering accurate solutions. However, these methods do not scale up with modern computational resources such as GPUs and distributed computing. Their computational bottleneck is matrix factorization, which usually requires significant memory and cannot be directly applied on modern computing resources. In contrast, first-order methods (FOMs) require only matrix-vector multiplications, which work well on these modern computing infrastructures and have massively accelerated machine learning training over the last 15 years. This ongoing line of research aims to scale up and speed up CCO by using FOMs and modern computational resources, i.e., distributed computing and/or GPUs. Using LP as an example, I'll discuss how we can achieve this by explaining: (i) the intuition behind designing FOMs for LP; (ii) theoretical results, including complexity theory, condition number theory, infeasibility detection, and how theory can lead to better computation and a better understanding of the algorithm's performance; (iii) numerical results of the proposed algorithm on large instances and modern computing architectures; and (iv) large-scale applications. After that, I'll also talk about how to extend these ideas to QP.
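To make the FOM intuition in (i) concrete: a common starting point for first-order LP solvers is the primal-dual hybrid gradient (PDHG) method applied to the saddle-point form of a standard-form LP, minimize c^T x subject to Ax = b, x >= 0. The sketch below is a minimal illustration under that assumption, not the speaker's exact algorithm (practical solvers add enhancements such as preconditioning, adaptive step sizes, and restarts); the key point is that each iteration touches A only through matrix-vector products, with no factorization.

    import numpy as np

    def pdhg_lp(c, A, b, iters=5000):
        """Vanilla PDHG for: minimize c^T x subject to Ax = b, x >= 0."""
        m, n = A.shape
        x, y = np.zeros(n), np.zeros(m)
        # Step sizes satisfying tau * sigma * ||A||_2^2 <= 1 ensure convergence.
        tau = sigma = 1.0 / np.linalg.norm(A, 2)
        for _ in range(iters):
            # Projected primal step: the Lagrangian gradient in x is c - A^T y.
            x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)
            # Dual step at the extrapolated point 2*x_new - x (the PDHG trick).
            y = y + sigma * (b - A @ (2 * x_new - x))
            x = x_new
        return x, y

On a toy instance such as c = [1, 2], A = [[1, 1]], b = [1], the iterates approach the optimal vertex x = (1, 0). The fixed iteration count is purely illustrative; a real solver would instead stop on primal-dual residual tolerances.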

Biography

Haihao (Sean) Lu is an assistant professor of Operations Management at the University of Chicago Booth School of Business. His research focuses on extending the computational and mathematical boundaries of methods for solving large-scale optimization problems in data science, machine learning, and operations research. Before joining Booth, he was a faculty visitor on Google Research's large-scale optimization team. He obtained his Ph.D. in Operations Research and Mathematics at MIT in 2019. His research has been recognized with several awards, including the INFORMS Optimization Society Young Researcher Prize, the INFORMS Revenue Management and Pricing Section Prize, and the Michael H. Rothkopf Junior Researcher Paper Prize (first place).

Start date
Tuesday, April 30, 2024, 11 a.m.
End date
Tuesday, April 30, 2024, 12 p.m.
Location

Keller Hall 3-180 and via Zoom.