ML Seminar: The power of adaptivity in representation learning

The UMN Machine Learning Seminar Series brings together faculty, students, and local industrial partners who are interested in the theoretical, computational, and applied aspects of machine learning, to pose problems, exchange ideas, and foster collaborations. Talks are held every Thursday from 11 a.m. to 12 p.m. during the Fall 2022 semester.

This week's speaker, Aryan Mokhtari (UT Austin), will be giving a talk titled "The power of adaptivity in representation learning".

Abstract

From meta-learning to federated learning

A central problem in machine learning is as follows: How should we train models using data generated from a collection of clients/environments, if we know that these models will be deployed in a new and unseen environment?

In the setting of few-shot learning, two prominent approaches are: (a) develop a modeling framework that is “primed” to adapt, such as Model-Agnostic Meta-Learning (MAML), or (b) develop a common model using federated learning (such as FedAvg), and then fine-tune the model for the deployment environment. We study both of these approaches in the multi-task linear representation setting. We show that models trained through either approach generalize to new environments because the training dynamics induce them to evolve toward the common data representation shared among the clients’ tasks.
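For concreteness, here is a minimal sketch of the multi-task linear representation setting the abstract refers to; the variable names, dimensions, and noise level are illustrative assumptions, not taken from the speaker's work. Each client/task t observes labels generated through a shared low-dimensional representation and a task-specific head.

import numpy as np

# Hypothetical multi-task linear representation setup: each task t sees
# y = <B* w_t*, x> + noise, where B* (d x k) is a representation shared
# across tasks and w_t* (k,) is a task-specific head.
rng = np.random.default_rng(0)
d, k, n_tasks, n_samples = 20, 3, 10, 50

B_star, _ = np.linalg.qr(rng.standard_normal((d, k)))  # shared ground-truth representation
tasks = []
for _ in range(n_tasks):
    w_star = rng.standard_normal(k)                     # task-specific head
    X = rng.standard_normal((n_samples, d))
    y = X @ (B_star @ w_star) + 0.01 * rng.standard_normal(n_samples)
    tasks.append((X, y))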

In both cases, the structure of the bi-level update at each iteration (an inner and outer update with MAML, and a local and global update with FedAvg) holds the key: the diversity among client data distributions is exploited via the inner/local updates, which in turn induce the outer/global updates to bring the representation closer to the ground truth. In both settings, these are the first results that formally show representation learning and establish exponentially fast convergence to the ground-truth representation. Based on joint work with Liam Collins, Hamed Hassani, Sewoong Oh, and Sanjay Shakkottai.
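Continuing the sketch above, the following is a hedged, first-order illustration of a generic inner/outer (bi-level) step in the linear setting, not the speakers' exact MAML or FedAvg updates: the inner/local step adapts a task-specific head on the current representation, and the outer/global step averages gradients of the shared representation evaluated at the adapted heads (the dependence of the adapted head on the representation is ignored here for simplicity).

import numpy as np

def inner_outer_step(B, tasks, inner_lr=0.1, outer_lr=0.01):
    """One generic bi-level step (illustrative sketch only).
    Inner/local: adapt a task-specific head w on the current representation B.
    Outer/global: average task gradients w.r.t. B at the adapted heads."""
    grad_B = np.zeros_like(B)
    for X, y in tasks:
        n = len(y)
        # Inner/local update: one gradient step on the head w, starting from w = 0.
        w = np.zeros(B.shape[1])
        resid = X @ B @ w - y
        w = w - inner_lr * (B.T @ X.T @ resid) / n
        # Outer/global contribution: gradient of the task loss w.r.t. B, holding w fixed.
        resid = X @ B @ w - y
        grad_B += (X.T @ resid)[:, None] @ w[None, :] / n
    return B - outer_lr * grad_B / len(tasks)

# Example usage: iterate from a random representation.
B = np.linalg.qr(np.random.default_rng(1).standard_normal((d, k)))[0]
for _ in range(100):
    B = inner_outer_step(B, tasks)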

Biography

Aryan Mokhtari is an Assistant Professor in the Electrical and Computer Engineering Department of the University of Texas at Austin (UT Austin), where he is the Fellow of Texas Instruments/Kilby. Before joining UT Austin, he was a Postdoctoral Associate in the Laboratory for Information and Decision Systems (LIDS) at MIT. Prior to that, he was a Research Fellow at the Simons Institute for the program on “Bridging Continuous and Discrete Optimization.” He received his Ph.D. in electrical and systems engineering from the University of Pennsylvania (Penn). He is the recipient of the Army Research Office (ARO) Early Career Program Award, the Simons-Berkeley Research Fellowship, and Penn’s Joseph and Rosaline Wolf Award for Best Doctoral Dissertation.

Start date
Wednesday, Oct. 19, 2022, 11 a.m.
End date
Wednesday, Oct. 19, 2022, Noon
Location

3-180 Keller Hall and via Zoom