Past Events
Data Driven Modeling of Unknown Systems with Deep Neural Networks
Tuesday, Oct. 31, 2023, 1:25 p.m. through Tuesday, Oct. 31, 2023, 2:25 p.m.
Lind Hall 325 and Zoom
Data Science Seminar
Dongbin Xiu (The Ohio State University)
Abstract
We present a framework for the predictive modeling of unknown systems from measurement data. The method is designed to discover/approximate the unknown evolution operator, i.e., the flow map, behind the data. A deep neural network (DNN) is employed to construct such an approximation. Once an accurate DNN model for the evolution operator is constructed, it serves as a predictive model for the unknown system and enables us to conduct system analysis. We demonstrate that the flow map learning (FML) approach is applicable to a wide class of problems, including dynamical systems, systems with missing variables and hidden parameters, and partial differential equations (PDEs).
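As a rough sketch of the flow-map idea, the toy PyTorch snippet below trains a small network on snapshot pairs (u_n, u_{n+1}) and then iterates it as a predictive model; the architecture, sizes, and training loop are illustrative assumptions, not the speaker's implementation.

```python
# A minimal sketch of flow-map learning, assuming snapshot pairs
# (u_n, u_{n+1}) from trajectories of the unknown system; the network
# architecture, sizes, and training loop are illustrative choices.
import torch
import torch.nn as nn

dim = 2  # assumed state dimension of the unknown system

# DNN approximation of the one-step evolution operator (flow map)
flow_map = nn.Sequential(
    nn.Linear(dim, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, dim),
)

def train(u_now, u_next, epochs=2000, lr=1e-3):
    """Fit flow_map so that flow_map(u_n) is close to u_{n+1}."""
    opt = torch.optim.Adam(flow_map.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(flow_map(u_now), u_next)
        loss.backward()
        opt.step()

def predict(u0, steps):
    """Iterate the learned flow map as a predictive model."""
    traj = [u0]
    with torch.no_grad():
        for _ in range(steps):
            traj.append(flow_map(traj[-1]))
    return torch.stack(traj)
```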
The Impact of Linear Constraints in Mean-Variance Optimization
Friday, Oct. 27, 2023, 1:25 p.m. through Friday, Oct. 27, 2023, 2:25 p.m.
Lind Hall 325 or Zoom
Industrial Problems Seminar
Christopher Bemis (X Cubed Capital Management)
Abstract
We study the effect linear constraints have on risk in the context of mean-variance optimization (MVO). Jagannathan and Ma (2003) establish an equivalence between certain constrained and unconstrained MVO problems via a modification of the covariance matrix. We extend their results to arbitrary linear constraints, provide alternative interpretations of how constraints modify the input parameters of the problem at hand, and explain why ex-post performance is improved in the constrained setting. In addition, we present a signal modification strategy similar in approach to that of Black-Litterman.
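For illustration, the snippet below contrasts unconstrained MVO with MVO under general linear equality constraints, solved via the KKT system; Sigma, mu, A, b, and risk_aversion are placeholder inputs, not the speaker's data or method.

```python
# An illustrative contrast of unconstrained vs. linearly constrained
# mean-variance optimization; Sigma, mu, A, b, and risk_aversion are
# placeholder inputs, not the speaker's data or method.
import numpy as np

def mvo_unconstrained(Sigma, mu, risk_aversion=1.0):
    """argmax_w  mu'w - (risk_aversion/2) w'Sigma w."""
    return np.linalg.solve(risk_aversion * Sigma, mu)

def mvo_linear_constraints(Sigma, mu, A, b, risk_aversion=1.0):
    """Same objective subject to A w = b, solved via the KKT system."""
    n, m = Sigma.shape[0], A.shape[0]
    K = np.block([[risk_aversion * Sigma, A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([mu, b]))
    return sol[:n], sol[n:]  # weights, Lagrange multipliers

# Example: fully invested portfolio (weights sum to one)
rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4))
Sigma = G @ G.T + 4 * np.eye(4)   # positive definite covariance
mu = rng.standard_normal(4)       # expected-return signal
w, lam = mvo_linear_constraints(Sigma, mu, np.ones((1, 4)), np.array([1.0]))
```

The Lagrange multipliers returned here are, roughly, the quantities that analyses in the Jagannathan-Ma spirit absorb into a modified covariance matrix.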
Trading off accuracy for reduced computation in scientific computing
Tuesday, Oct. 24, 2023, 1:25 p.m. through Tuesday, Oct. 24, 2023, 2:25 p.m.
Lind Hall 325 or via Zoom
Data Science Seminar
Alex Gittens (Rensselaer Polytechnic Institute)
Abstract
Classical linear algebraic algorithms guarantee high accuracy in exchange for high computational cost. These costs can be infeasible in modern applications, so over the last two decades, randomized algorithms have been developed that allow a user-specified trade-off between accuracy and computational efficiency when dealing with massive data sets. The intuition is that when dealing with an excess of structured data (e.g., a large matrix of low numerical rank), one can toss away a large portion of this data, thereby reducing the computational load, without introducing much additional error into the computation. In this talk we look at the design and performance analysis of several numerical linear algebra and machine learning algorithms based upon this principle, including linear solvers, approximate kernel machines, and low-rank tensor decompositions.
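One concrete instance of this trade-off is the randomized SVD; in the sketch below (rank and oversampling parameters are illustrative), most of a numerically low-rank matrix is discarded through a random projection before an exact but small SVD is run.

```python
# A standard randomized SVD sketch (Halko-Martinsson-Tropp style);
# the rank and oversampling parameters are illustrative.
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    n = A.shape[1]
    k = rank + oversample
    Omega = np.random.default_rng(seed).standard_normal((n, k))
    Y = A @ Omega                         # sample the range of A
    Q, _ = np.linalg.qr(Y)                # orthonormal basis for the sample
    B = Q.T @ A                           # small k x n problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]
```

Increasing the rank or oversampling lowers the approximation error at higher cost, which is exactly the user-specified trade-off described above.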
Computational mean-field games: from conventional methods to deep generative models
Tuesday, Oct. 17, 2023, 1:25 p.m. through Tuesday, Oct. 17, 2023, 2:25 p.m.
3-180 Keller Hall and Zoom
Data Science Seminar
Jiajia Yu (Duke University)
Abstract
Mean-field games study the behavior of a large number of rational agents in a non-cooperative game and have wide applications in various fields. They are not easy to solve numerically, however, because of their complicated structure.
In the first part of my talk, I will present an efficient and flexible algorithm for dynamic mean-field games. The algorithm is based on an accelerated proximal gradient method. It consists of an easy-to-implement gradient descent step and a projection step equivalent to solving an elliptic equation. We also extend the setting of mean-field games, and the algorithm, to manifolds. In the second part of my talk, I will bridge mean-field games with a deep generative model known as normalizing flows. The connection gives a computational approach for high-dimensional mean-field games and improves the training of the generative model.
The first part is based on joint works with Rongjie Lai (Purdue), Wuchen Li (UofSC) and Stanley Osher (UCLA). The second part is based on a joint work with Han Huang (RPI), Rongjie Lai (Purdue) and Jie Chen (IBM).
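The accelerated proximal gradient template underlying the first part is sketched below in generic form; grad_f, prox_g, and the toy problem are placeholders (in the mean-field game setting, the proximal/projection step amounts to solving an elliptic equation rather than the toy prox used here).

```python
# A generic accelerated proximal gradient (FISTA-style) template;
# grad_f and prox_g are placeholders, not the speaker's algorithm.
import numpy as np

def accelerated_proximal_gradient(x0, grad_f, prox_g, step, iters=500):
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = prox_g(y - step * grad_f(y), step)     # descent + projection
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x

# Toy usage: minimize ||x - c||^2 / 2 + ||x||_1 (soft-thresholding prox)
c = np.array([3.0, -2.0])
soft = lambda z, s: np.sign(z) * np.maximum(np.abs(z) - s, 0.0)
x_star = accelerated_proximal_gradient(c * 0.0, lambda x: x - c, soft, step=1.0)
# x_star is approximately [2, -1]
```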
Distinct spatiotemporal tumor-immune ecologies define therapeutic response in NSCLC patients
Industrial Problems Seminar
Sandhya Prabhakaran (Moffitt Cancer Centre)
Abstract
The talk will be geared towards a general audience. Its goal is to explain the importance of data and the many ways data can be analyzed to benefit patient care. In this talk, I will focus on non-small cell lung cancer (NSCLC), the patient data we obtained, the computational approaches used, and the potential biomarkers we identified in the process.
How much can one learn a PDE from its solution?
Tuesday, Oct. 10, 2023, 1:25 p.m. through Tuesday, Oct. 10, 2023, 2:25 p.m.
Lind Hall 325 and Zoom
Data Science Seminar
Yimin Zhong (Auburn University)
Abstract
In this work we study a few basic questions for PDE learning from observed solution data. Using various types of PDEs, we show 1) how the approximate dimension (richness) of the data space spanned by all snapshots along a solution trajectory depends on the differential operator and the initial data, and 2) the identifiability of a differential operator from solution data on local patches. We then propose a consistent and sparse local regression (CaSLR) method for general PDE identification. Our method is data driven and requires a minimal amount of local measurements in space and time from a single solution trajectory by enforcing global consistency and sparsity.
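As a hedged illustration of sparsity-promoting PDE identification in the spirit of CaSLR, the snippet below fits observed time derivatives against a library of candidate terms and prunes small coefficients by iterative thresholding; the library, data, and threshold are assumptions, and the actual method additionally enforces consistency across local patches.

```python
# A sketch of sparse regression for PDE identification: fit observed
# time derivatives u_t against a library of candidate terms and prune
# small coefficients by iterative thresholding. Library, data, and
# threshold are assumed, not taken from the talk.
import numpy as np

def identify_pde(u_t, library, names, threshold=0.05, iters=10):
    """u_t: (N,) samples of the time derivative; library: (N, p)
    matrix whose columns are candidate terms (u, u_x, u_xx, ...)."""
    c = np.linalg.lstsq(library, u_t, rcond=None)[0]
    for _ in range(iters):
        c[np.abs(c) < threshold] = 0.0      # enforce sparsity
        active = c != 0.0
        if not active.any():
            break
        # refit the surviving terms by least squares
        c[active] = np.linalg.lstsq(library[:, active], u_t, rcond=None)[0]
    return {name: coef for name, coef in zip(names, c) if coef != 0.0}
```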
Navigating Interdisciplinary Research as a Mathematician
Friday, Oct. 6, 2023, 1:25 p.m. through Friday, Oct. 6, 2023, 2:25 p.m.
Lind Hall 325 or virtually by Zoom
Industrial Problems Seminar
Julie Mitchell (Oak Ridge National Laboratory)
Abstract
Being effective in industrial and team science settings requires the ability to work across disciplines. In this talk, I will reflect on how to be successful working across disciplines and what types of opportunities exist for mathematicians working at national laboratories. I will also reflect on past projects I’ve pursued, which include high-performance computing and machine learning approaches to the understanding of macromolecular structure and binding.
Exploiting geometric structure in matrix-valued optimization
Tuesday, Oct. 3, 2023, 1:25 p.m. through Tuesday, Oct. 3, 2023, 2:25 p.m.
Lind Hall 325 and Zoom
Data Science Seminar
Melanie Weber (Harvard University)
Abstract
Matrix-valued optimization tasks arise in many machine learning applications. Often, exploiting non-Euclidean structure in such problems can give rise to algorithms that are computationally superior to standard nonlinear programming approaches. In this talk, we consider the problem of optimizing a function on a (Riemannian) manifold subject to convex constraints. Several classical problems can be phrased as constrained optimization on matrix manifolds. These include barycenter problems, as well as the computation of Brascamp-Lieb constants; the latter are of central importance in many areas of mathematics and computer science through connections to maximum likelihood estimators in Gaussian models, Tyler's M-estimator of scatter matrices, and operator scaling. We introduce Riemannian Frank-Wolfe methods, a class of first-order methods for solving constrained optimization problems on manifolds, and present a global, non-asymptotic convergence analysis. We further discuss a class of CCCP-style algorithms for Riemannian "difference of convex" functions and explore connections to constrained optimization. We complement our discussion with applications to the two problems described above. Based on joint work with Suvrit Sra.
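For orientation, the snippet below shows the classical (Euclidean) Frank-Wolfe template that the Riemannian method generalizes; the simplex example, step rule, and oracle are illustrative, and in the Riemannian version the straight segment toward the oracle point is replaced by a geodesic on the manifold.

```python
# The classical Frank-Wolfe template; the simplex example, step rule,
# and oracle are illustrative, not the Riemannian method itself.
import numpy as np

def frank_wolfe(grad_f, lmo, x0, iters=200):
    x = x0.copy()
    for k in range(iters):
        s = lmo(grad_f(x))                 # linear minimization oracle
        gamma = 2.0 / (k + 2.0)            # standard step-size rule
        x = (1.0 - gamma) * x + gamma * s  # move along the segment to s
    return x

# Toy usage: minimize ||x - c||^2 over the probability simplex
c = np.array([0.2, 0.5, 0.9])
lmo = lambda g: np.eye(len(g))[np.argmin(g)]  # best vertex for <g, s>
x_star = frank_wolfe(lambda x: 2.0 * (x - c), lmo, np.ones(3) / 3.0)
```

The appeal of this template is that it never requires projecting onto the constraint set, only solving a linear problem over it, which is what makes manifold-constrained versions tractable.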
What makes an algorithm industrial strength?
Friday, Sept. 29, 2023, 1:25 p.m. through Friday, Sept. 29, 2023, 2:25 p.m.
Lind Hall 325 or Zoom
Industrial Problems Seminar
Thomas Grandine (University of Washington)
Abstract
In this talk, I will discuss the details of two algorithms for parametrizing planar curves in an industrial design context. The first algorithm, developed in an academic setting by world-class researchers, solves the problem posed by the researchers in a very satisfying and elegant way. Yet that algorithm, elegant though it may be, turns out to be ineffective in a real-world engineering environment. The second algorithm is an extension of the first that eliminates the issues that made it inadequate for industrial use.
Information Gamma calculus: Convexity analysis for stochastic differential equations
Tuesday, Sept. 26, 2023, 1:25 p.m. through Tuesday, Sept. 26, 2023, 2:25 p.m.
Lind Hall 325 or Zoom
Data Science Seminar
Wuchen Li (University of South Carolina)
Abstract
We study the Lyapunov convergence analysis for degenerate and non-reversible stochastic differential equations (SDEs). We apply the Lyapunov method to the Fokker-Planck equation, in which the Lyapunov functional is chosen as a weighted relative Fisher information functional. We derive a structure condition and formulate the Lyapunov constant explicitly. Given a positive Lyapunov constant, we prove the exponential convergence of the probability density function towards its invariant distribution in the L1 norm. Several examples are presented: underdamped Langevin dynamics with variable diffusion matrices, quantum SDEs in Lie groups (Heisenberg group, displacement group, and Martinet sub-Riemannian structure), three oscillator chain models with nearest-neighbor couplings, and underdamped mean-field Langevin dynamics (weakly self-consistent Vlasov-Fokker-Planck equations).
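Schematically, and in generic notation (the weight matrix M(x) and the rate lambda below are placeholders, not the talk's precise structure condition), the Lyapunov argument has the following shape:

```latex
% Generic shape of the argument; M(x) and \lambda are placeholders.
\begin{align*}
  \partial_t \rho_t &= \nabla \cdot (\rho_t\, b) + \nabla^2 : (a\, \rho_t),
    \qquad \rho_t \to \pi \ \text{(invariant density)},\\
  \mathcal{I}_M(\rho \,\|\, \pi)
    &= \int \Bigl(\nabla \log \tfrac{\rho}{\pi}\Bigr)^{\!\top} M(x)\,
       \Bigl(\nabla \log \tfrac{\rho}{\pi}\Bigr)\, \rho \, dx,\\
  \frac{d}{dt}\, \mathcal{I}_M(\rho_t \,\|\, \pi)
    &\le -\lambda\, \mathcal{I}_M(\rho_t \,\|\, \pi)
    \;\Longrightarrow\;
    \mathcal{I}_M(\rho_t \,\|\, \pi) \le e^{-\lambda t}\,
    \mathcal{I}_M(\rho_0 \,\|\, \pi).
\end{align*}
```

Exponential L1 convergence of the density to the invariant distribution then follows from the Fisher information decay via entropy and Pinsker-type inequalities.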