Past Events

Free Boundary Problems on Lattices

Charles Smart (Yale University)

Small water droplets on patterned surfaces can form interesting shapes. I will discuss a rigorous analysis of a simple finite difference model that explains these shapes. This will include a basic introduction to free boundary problems on lattices. This is joint work with Feldman.

Simplifying Federated Learning Jobs With Flame

Myungjin Lee (Cisco)

Federated machine learning (FL) is gaining a lot of traction across research communities and industries. FL allows machine learning (ML) model training without sharing data across different parties, thus natively supporting data privacy. However, designing and executing FL jobs is not an easy task today. Flame is an open-source project that aims to ease the composition of FL jobs and the management of their lifecycle across different environments. Towards those ends, Flame is architected to be open and extensible from its inception. This talk will present an overview of the project and a demo on how the Flame system works in a Kubernetes environment.

Myungjin Lee is a Senior Researcher at Cisco's Emerging Technologies and Incubation (ET&I). He leads research on systems for edge computing. His current focus is on federated learning and its use cases at the edge. He is passionate about building software for distributed systems and computer networks.

Prior to Cisco, he worked at Salesforce as a software engineer, where he led a secure cross-datacenter communication project. He was also an Assistant Professor at the University of Edinburgh, UK, where he led research activities around systems and networks, including datacenter networks, network telemetry, and SDN.

A Distributed Linear Solver via the Kaczmarz Algorithm

Eric Weber (Iowa State University)

The Kaczmarz algorithm is a method for solving linear systems of equations that was introduced in 1937.  The algorithm is a powerful tool with many applications in signal processing and data science that has enjoyed a resurgence of interest in recent years.  We'll discuss some of the history of the Kaczmarz algorithm as well as describe some of the recent interest and applications.  We'll then discuss how the algorithm can be used as a consensus method to process data in a distributed environment.
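As a concrete illustration (a minimal sketch, not code from the talk), the classical cyclic Kaczmarz iteration repeatedly projects the current iterate onto the hyperplane defined by one row of the system:

```python
def kaczmarz(A, b, iterations=200):
    """Solve A x = b by cyclically projecting onto each row's hyperplane.

    A: list of rows (each a list of floats), b: list of floats.
    Assumes a consistent system; returns an approximate solution.
    """
    n = len(A[0])
    x = [0.0] * n
    for k in range(iterations):
        i = k % len(A)                      # sweep through the rows cyclically
        a = A[i]
        # signed distance (scaled) from x to the hyperplane <a, y> = b[i]
        residual = b[i] - sum(aj * xj for aj, xj in zip(a, x))
        norm_sq = sum(aj * aj for aj in a)
        step = residual / norm_sq
        x = [xj + step * aj for xj, aj in zip(x, a)]
    return x

# Example: a small consistent 2x2 system with exact solution (1, 3).
A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]
x = kaczmarz(A, b)   # x is approximately [1.0, 3.0]
```

Randomized row selection (rather than the cyclic sweep above) is what underlies much of the algorithm's recent resurgence, and the distributed-consensus use discussed in the talk goes well beyond this single-machine sketch.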

Dr. Eric Weber holds a Ph.D. in Mathematics from the University of Colorado.  His research interests include harmonic analysis, approximation theory and data science.  Past research includes developing novel wavelet transforms for image processing, and reproducing kernel methods for the harmonic analysis of fractals.  Current research projects include the development of new algorithms for processing distributed spatiotemporal datasets; extending alternating projection methods for optimization in non-Euclidean geometries; using harmonic analysis techniques for understanding the approximation properties of neural networks; and developing machine learning techniques to improve the diagnosis of severe wind occurrences.

A Characteristics-based Approach to Computing Tukey Depths

Martin Molina-Fructuoso (North Carolina State University)

Registration is required to access the Zoom webinar. Martin will also be in person in 402 Walter.

Statistical depths extend the concepts of quantiles and medians to multidimensional data and can be useful to establish a ranking order within data clusters. The Tukey depth is one classical geometric construction of a highly robust statistical depth that has deep connections with convex geometry. Finding the Tukey depth for general measures is a computationally expensive problem, particularly in high dimensions.

In recent work (in collaboration with Ryan Murray) we have shown a link between the Tukey depth of measures with some degree of regularity and a partial differential equation of Hamilton-Jacobi type. This talk will discuss a strategy, based on the characteristics of the differential equation, that uses this connection to calculate Tukey depths. This approach is inspired by other recent work which attempts to compute solutions to eikonal equations in high dimensions using characteristics-based methods for special classes of initial data.
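For orientation, the Tukey (halfspace) depth of a point in a finite sample can be approximated straight from its definition by scanning candidate halfspace directions. This brute-force 2-D sketch is purely illustrative (the talk concerns a PDE-based approach, and `tukey_depth_2d` is a hypothetical helper, not code from the work), and it also hints at why direct computation becomes expensive in high dimensions:

```python
import math

def tukey_depth_2d(point, sample, n_dirs=360):
    """Approximate the Tukey depth of `point` in a 2-D sample.

    The depth is the minimum, over closed halfspaces containing the point,
    of the fraction of sample points in that halfspace; here we scan
    n_dirs equally spaced directions as a coarse approximation.
    """
    n = len(sample)
    depth = 1.0
    for k in range(n_dirs):
        theta = 2 * math.pi * k / n_dirs
        u = (math.cos(theta), math.sin(theta))
        # count sample points in the halfspace {y : <u, y - point> >= 0}
        count = sum(1 for (px, py) in sample
                    if u[0] * (px - point[0]) + u[1] * (py - point[1]) >= 0)
        depth = min(depth, count / n)
    return depth

# The center of a symmetric cloud has high depth; extreme points have low depth.
square = [(x, y) for x in range(-2, 3) for y in range(-2, 3)]  # 5x5 grid
center_depth = tukey_depth_2d((0, 0), square)
corner_depth = tukey_depth_2d((2, 2), square)
```

In dimension d the direction search is over a (d-1)-sphere, so this kind of exhaustive scan degrades quickly, which is one motivation for the characteristics-based strategy of the talk.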

Martin Molina-Fructuoso graduated from the University of Maryland, College Park with a PhD in Applied Mathematics advised by Profs. Antoine Mellet and Pierre-Emmanuel Jabin. He then joined North Carolina State University as a Postdoctoral Research Scholar where he worked with Prof. Ryan Murray. His interests lie in PDE-based variational methods for problems related to machine learning and in optimal transportation and its applications.

How Well Can We Generalize Nonlinear Learning Models in High Dimensions?

Inbar Seroussi (Weizmann Institute of Science)

Modern learning algorithms such as deep neural networks operate in regimes that defy traditional statistical learning theory. Neural network architectures often contain more parameters than training samples, yet despite their huge complexity, the generalization error they achieve on real data is small. In this talk, we aim to study the generalization properties of algorithms in high dimensions. We first show that algorithms in high dimensions require a small bias for good generalization, and that this is indeed the case for deep neural networks in the over-parametrized regime. We then provide lower bounds on the generalization error that hold for any algorithm in various settings, calculated using random matrix theory (RMT). We will review the connection between deep neural networks and RMT, along with existing results. These bounds are particularly useful when the analytic evaluation of standard performance bounds is not possible due to the complexity and nonlinearity of the model, and they can serve as a benchmark for testing performance and optimizing the design of actual learning algorithms. Joint work with Ofer Zeitouni.

Inbar Seroussi is a postdoctoral fellow in the mathematics department at the Weizmann Institute of Science, hosted by Prof. Ofer Zeitouni. Previously, she completed her Ph.D. in the applied mathematics department at Tel-Aviv University under the supervision of Prof. Nir Sochen. Her research interest includes modeling of complex and random systems in high dimensions with application to modern machine learning, physics and medical imaging. She develops and uses advanced tools drawn from statistical physics, stochastic calculus, and random matrix theory.

Data Science in Business vs. Academia

Philippe Barbe (Paramount)

This talk discusses similarities and differences between doing data science in academic and business environments. What are the main relevant differences between these environments? Why are the problems of different complexities? What is helpful to know? The talk builds on my years of experience doing both. All questions are welcome.

Philippe Barbe, PhD, is Senior Vice President of Content Data Science at Paramount (formerly ViacomCBS). In this role Philippe is responsible for data science modeling to inform content exploitation decisions across Paramount businesses. His team builds predictive models that support highly critical multi-million dollar content-related decisions in collaboration with many data science and research groups across Paramount.

Philippe received a PhD in mathematics and statistics from University Pierre et Marie Curie in Paris, France (now Sorbonne University) and a degree in management and government from ENSAE. He worked for over 20 years at the CNRS as a mathematician specializing in data science and related fields. He has authored or co-authored 5 books and numerous scientific papers, and has been an invited professor at many universities worldwide, including Yale and Georgia Tech in the US. He has been working in the media and entertainment industry since 2015.

Method of Moments: From Sample Complexity to Efficient Implicit Computations

Joao Pereira (The University of Texas at Austin)

In this talk, I focus on the multivariate method of moments for parameter estimation. First, from a theoretical standpoint, we show that in problems where the noise is high, the number of observations necessary to estimate parameters is dictated by the moments of the distribution. Second, from a computational standpoint, we address the curse of dimensionality: the d-th moment of an n-dimensional random variable is a tensor with n^d entries. For Gaussian Mixture Models (GMMs), we develop numerical methods for implicit computations with the empirical moment tensors. This reduces the computational and storage costs, and opens the door to making the method of moments competitive with expectation-maximization methods. Time permitting, we connect these results to symmetric CP tensor decomposition and sketch a recent algorithm which is faster than the state of the art and comes with guarantees. Collaborators include Joe Kileel (UT Austin), Tamara Kolda, and Timo Klock (Deeptech).
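To fix ideas, here is a textbook one-dimensional instance of the method of moments, deliberately far simpler than the GMM moment-tensor machinery of the talk: the first two raw moments of a Gaussian determine its mean and variance, so matching empirical moments to model moments yields estimators in closed form.

```python
import random

def empirical_moments(xs, d):
    """First d empirical (raw) moments of a 1-D sample."""
    n = len(xs)
    return [sum(x ** k for x in xs) / n for k in range(1, d + 1)]

# For a single Gaussian N(mu, sigma^2) the model moments are
#   m1 = mu,   m2 = mu^2 + sigma^2,
# so inverting these two equations gives the estimators below.
random.seed(0)
xs = [random.gauss(2.0, 0.5) for _ in range(100_000)]
m1, m2 = empirical_moments(xs, 2)
mu_hat = m1
var_hat = m2 - m1 ** 2
```

For an n-dimensional variable the d-th raw moment is an order-d tensor with n^d entries, which is exactly the storage blow-up the implicit computations in the talk are designed to avoid.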

João is a postdoc at the Oden Institute at UT Austin, working with Joe Kileel and Rachel Ward. Previously, he was a postdoc at Duke University, working with Vahid Tarokh, and he obtained his Ph.D. in Applied Mathematics at Princeton University, advised by Amit Singer and Emmanuel Abbe. This summer, he will join IMPA, in Rio de Janeiro, Brazil, as an assistant professor. He is broadly interested in tensor decompositions, information theory, and applied mathematics.

Creating Value in PE Using Advanced Analytics

Erik Einset (Global Infrastructures Partners)

Value creation in private equity investment portfolios is fundamental to delivering results for PE customers. Our focus is on the energy and transportation sectors; with a deep understanding of how these industries work, we explore applications where advanced analytics and better use of data can create more efficient operations and growth, which translates into increased earnings and value. We will discuss how value is created and some specific use cases where we believe there are opportunities to apply advanced analytics.

Erik has over 30 years of experience in various engineering and leadership roles, including 17 years at GE in R&D, product development, process improvement, technical sales, and management.  Since 2008, he has been a member of the Business Improvement team at Global Infrastructure Partners, working in a variety of infrastructure businesses in the energy and transportation sectors.  Erik is the author of 6 patents and numerous technical publications, and holds Chemical Engineering degrees from Cornell University (BS) and the University of Minnesota (PhD).

Relaxing Gaussian Assumptions in High Dimensional Statistical Procedures

Larry Goldstein (University of Southern California)

The assumption that high dimensional data is Gaussian is pervasive in many statistical procedures, due not only to its tail decay, but also to the level of analytic tractability this special distribution provides. We explore the relaxation of the Gaussian assumption in Single Index models and Shrinkage estimation using two tools that originate in Stein’s method: Stein kernels, and the zero bias transform. Taking this approach leads to measures of discrepancy from the Gaussian that arise naturally from the nature of the procedures considered, and result in performance bounds in contexts not restricted to the Gaussian. The resulting bounds are tight in the sense that they include an additional term that reflects the cost of deviation from the Gaussian, and vanish for the Gaussian, thus recovering this particular special case.

Joint work with: Xiaohan Wei, Max Fathi, Gesine Reinert, and Adrien Saumard

Larry Goldstein received his PhD in Mathematics from the University of California, San Diego in 1984, and is currently Professor in the department of Mathematics at the University of Southern California in Los Angeles. His main area of study is the use of Stein's method for distributional approximation and its applications in statistics, and he also has interests in concentration inequalities, sequential analysis and sampling schemes in epidemiology.

Multi-Agent Autonomy and Beyond: A Mathematician’s Life at GDMS

Ben Strasser (General Dynamics Mission Systems)

Multi-agent autonomy is a broad field touching a wide variety of topics, including control theory, hybrid system verification, game theory, reinforcement learning, information theory, and network optimization. Agents must carefully use limited computational resources to perform complex and collaborative tasks while contending with both in-team information imbalances and non-collaborating agents. This talk provides a high-level overview of the multi-agent autonomy problem space and identifies several practical and theoretical challenges we face. I discuss recent work in multi-agent autonomy and my experience as a mathematician at GDMS.  I recommend this talk for any mathematics students considering a career in industry, as well as all parties with interest in problems related to multi-agent autonomy.