ML Seminar: Qing Qu (University of Michigan)

CSE DSI Machine Learning seminars will be held Tuesdays, 11 a.m. - 12 p.m. Central Time, in hybrid mode. We hope to facilitate face-to-face interactions among faculty, students, and partners from industry, government, and NGOs by hosting some of the seminars in person. See individual dates for more information.

This week's speaker, Qing Qu (University of Michigan), will be giving a talk titled, "On the Emergence of Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Networks".


Over the past few years, an extensively studied phenomenon in training deep networks is the implicit bias of gradient descent towards parsimonious solutions. In this work, we first investigate this phenomenon by narrowing our focus to deep linear networks. Through our analysis, we reveal a surprising "law of parsimony" in the learning dynamics when the data possesses low-dimensional structures. Specifically, we show that the evolution of gradient descent starting from orthogonal initialization only affects a minimal portion of singular vector spaces across all weight matrices. In other words, the learning process happens only within a small invariant subspace of each weight matrix, even though all weight parameters are updated throughout training. 
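As a rough illustration of the phenomenon described above, the sketch below trains a small deep linear network on a rank-r target with scaled orthogonal initialization, then checks that each weight matrix changes only inside a low-dimensional subspace while the remaining singular directions are merely rescaled by a common factor. This is my own minimal reconstruction, not the speaker's code: the width, rank, depth, step size, and the rank-r matrix-factorization objective (standing in for whitened data with low-dimensional structure) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, L = 30, 3, 3               # layer width, target rank, network depth (assumed)
eps, lr, steps = 0.5, 0.1, 1500  # init scale, step size, GD iterations (assumed)

# Rank-r target matrix, a stand-in for data with low-dimensional structure;
# normalized to unit spectral norm so the step size is stable.
M = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
M /= np.linalg.norm(M, 2)

# Scaled orthogonal initialization for every layer.
def orth_init():
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return eps * Q

Ws = [orth_init() for _ in range(L)]
W0 = [W.copy() for W in Ws]

for _ in range(steps):
    # lows[l] = W_{l-1} ... W_1 (identity for the first layer)
    lows = [np.eye(d)]
    for W in Ws[:-1]:
        lows.append(W @ lows[-1])
    E = Ws[-1] @ lows[-1] - M    # end-to-end error P - M
    # gradient of (1/2)||P - M||_F^2 w.r.t. layer l is
    #   (W_L ... W_{l+1})^T E (W_{l-1} ... W_1)^T
    up = np.eye(d)
    grads = [None] * L
    for l in range(L - 1, -1, -1):
        grads[l] = up.T @ E @ lows[l].T
        up = up @ Ws[l]
    for l in range(L):
        Ws[l] -= lr * grads[l]

# Check: apart from a common rescaling rho of the untouched singular
# directions, each layer's update is confined to a small subspace.
for l, (W, W_init) in enumerate(zip(Ws, W0)):
    s = np.linalg.svd(W, compute_uv=False)
    rho = np.median(s)           # shared scale of the inert directions
    resid = np.linalg.svd(W - (rho / eps) * W_init, compute_uv=False)
    eff_rank = int((resid > 1e-5 * s[0]).sum())
    print(f"layer {l + 1}: update confined to a rank-{eff_rank} subspace")
```

Under these assumptions, the residual after removing the common rescaling should have rank on the order of the data rank r rather than the full width d, even though every entry of every weight matrix is updated at each step.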

This simplicity in the learning dynamics could have significant implications for both efficient training and a better understanding of deep networks. First, the analysis enables us to considerably improve training efficiency by exploiting the low-dimensional structure in the learning dynamics: we can construct smaller, equivalent deep linear networks without sacrificing the benefits associated with their wider counterparts. Moreover, we demonstrate the potential implications for efficiently training deep nonlinear networks.

Second, it allows us to better understand deep representation learning by elucidating the progressive feature compression and discrimination from shallow to deep layers. The study lays the foundation for understanding hierarchical representations in deep nonlinear networks.


Qing Qu is an assistant professor in the EECS department at the University of Michigan. Prior to that, he was a Moore-Sloan Data Science Fellow at the Center for Data Science, New York University, from 2018 to 2020. He received his Ph.D. in Electrical Engineering from Columbia University in October 2018. He received his B.Eng. from Tsinghua University in July 2011 and an M.Sc. from Johns Hopkins University in December 2012, both in Electrical and Computer Engineering. He interned at the U.S. Army Research Laboratory in 2012 and at Microsoft Research in 2016. His research interest lies at the intersection of the foundations of data science, machine learning, numerical optimization, and signal/image processing, with a focus on developing efficient nonconvex methods and global optimality guarantees for solving representation learning and nonlinear inverse problems in engineering and imaging sciences. He is the recipient of the Best Student Paper Award at SPARS'15 (with Ju Sun and John Wright), the Microsoft PhD Fellowship in machine learning, the NSF CAREER Award in 2022, and an Amazon Research Award (AWS AI) in 2023.

Start date
Tuesday, Nov. 7, 2023, 11 a.m.
End date
Tuesday, Nov. 7, 2023, 12 p.m.

Keller Hall 3-180 or Zoom