Past Events

Multiscale analysis of manifold-valued curves

Nir Sharon (Tel Aviv University)

You may attend the talk either in person in Walter 402 or register via Zoom. Registration is required to access the Zoom webinar.

A multiscale transform is a standard signal and image processing tool that enables a mathematically hierarchical analysis of objects. Customarily, the first scale corresponds to a coarse representation, and as the scale increases, so does the refinement level of the entity we represent. This multiscale approach introduces a dynamic and flexible framework with many computational and approximation advantages. In this talk, we introduce a multiscale analysis that aims to represent manifold-valued curves. First, we will present the setting and our multiscale construction. Then, we will show some of the theoretical properties of our multiscale representation. Finally, we will conclude with several numerical examples illustrating how to apply our multiscale method to various data processing techniques.
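The construction in the talk is for manifold-valued curves and is not reproduced here. As a point of reference, the sketch below is a minimal Euclidean analogue of a two-scale decomposition of a sampled curve: keep every other sample as the coarse scale, and store as detail coefficients how far each discarded sample sits from the midpoint predicted by its coarse neighbours. The function names and the midpoint predictor are illustrative choices, not the talk's scheme; in the manifold-valued setting the midpoint average would be replaced by a geodesic midpoint and the subtraction by a Riemannian log map.

```python
import numpy as np

def decompose(points):
    """One level of a midpoint-interpolating multiscale transform.

    `points` is an (m, d) array sampling a curve, with m odd. The
    coarse scale keeps every other sample; the detail coefficients
    record the deviation of each discarded sample from the midpoint
    predicted by its two coarse neighbours.
    """
    assert len(points) % 2 == 1, "expects an odd number of samples"
    coarse = points[::2]
    predicted = 0.5 * (coarse[:-1] + coarse[1:])  # linear midpoint prediction
    details = points[1::2] - predicted
    return coarse, details

def reconstruct(coarse, details):
    """Exactly invert `decompose` by re-adding the stored details."""
    fine = np.empty((len(coarse) + len(details), coarse.shape[1]))
    fine[::2] = coarse
    fine[1::2] = 0.5 * (coarse[:-1] + coarse[1:]) + details
    return fine
```

For a smooth curve the detail coefficients are small and shrink with the sampling density, which is what makes truncating fine scales a useful compression and approximation device.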

Flexible multi-output multifidelity uncertainty quantification via MLBLUE

Matteo Croci (The University of Texas at Austin)


A central task in forward uncertainty quantification (UQ) is estimating the expectation of one or more quantities of interest (QoIs). In computational engineering, UQ problems often involve multiple QoIs and extremely heterogeneous models, both in terms of how they are constructed (varying grids, equations, or dimensions; different physics; surrogate and reduced-order models, ...) and in terms of their input-output structure (different models might have different uncertain inputs and yield different QoIs). In this complex scenario, it is crucial to design estimators that are as flexible and as efficient as possible.

Multilevel (or multifidelity) Monte Carlo (MLMC) methods are often the go-to methods for estimating expectations, as they are able to exploit the correlations between models to significantly reduce the estimation cost. However, multi-output strategies for MLMC methods are either sub-optimal or non-existent.

In this talk we focus on multilevel best linear unbiased estimators (MLBLUEs; Schaden and Ullmann, SIAM/ASA JUQ, 2021). MLBLUEs are extremely flexible and have the appealing property of being provably optimal among all multilevel linear unbiased estimators, making them, in our opinion, one of the most powerful MLMC methods available in the literature. Nevertheless, MLBLUEs have two limitations: 1) their setup requires solving a model selection and sample allocation problem (MOSAP), which is a non-trivial nonlinear optimization problem, and 2) they can only work with one scalar QoI at a time.

In this talk we show how the true potential of MLBLUEs can be unlocked:

  1. We present a new formulation of their MOSAP that can be solved almost as easily and efficiently as a linear program.
  2. We extend MLBLUEs to the multi- and infinite-dimensional output case.
  3. We provide multi-output MLBLUE MOSAP formulations that can be solved efficiently and consistently with widely available optimization software.

We show that the new multi-output MLBLUEs can be set up very efficiently and that they significantly outperform existing MLMC methods in practical problems with heterogeneous model structure.
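As a point of reference for what "best linear unbiased" means here: given several individually unbiased estimators of the same scalar QoI with known joint covariance, the minimum-variance unbiased linear combination has a closed form. The sketch below shows only this core building block, with a made-up illustrative covariance; it does not implement the MOSAP or the multi-output extensions discussed in the talk.

```python
import numpy as np

def blue_weights(C):
    """Weights of the best linear unbiased combination of correlated,
    individually unbiased estimators of the same scalar quantity.

    Minimizes the variance w @ C @ w subject to w.sum() == 1; the
    closed-form solution is C^{-1} 1 / (1^T C^{-1} 1).
    """
    ones = np.ones(C.shape[0])
    w = np.linalg.solve(C, ones)
    return w / w.sum()

# Illustrative covariance of two correlated model estimators
# (e.g. a fine and a coarse model); the numbers are made up.
C = np.array([[1.0, 0.3],
              [0.3, 2.0]])
w = blue_weights(C)
variance = w @ C @ w  # never larger than the best single estimator's variance
```

The combined variance is guaranteed not to exceed that of the best individual estimator, and the benefit grows as the models become more correlated.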

Matteo Croci is a postdoctoral researcher at the Oden Institute for Computational Engineering and Sciences at the University of Texas at Austin working with Karen E. Willcox and Robert D. (Bob) Moser. Before moving to Austin in late 2022, Matteo worked for two years as a postdoctoral researcher in the Mathematical Institute at the University of Oxford (UK) under the supervision of Michael B. (Mike) Giles. Matteo obtained his PhD from the University of Oxford (UK) in March 2020 under the supervision of Patrick E. Farrell and Michael B. (Mike) Giles, and in collaboration with Marie E. Rognes from Simula Research Laboratory (Oslo, Norway). Matteo has an MSc in Mathematical Modelling and Scientific Computing from the University of Oxford (UK), and a BSc in Mathematical Engineering from the Politecnico di Milano (Italy).

Matteo’s research has always been interdisciplinary, sitting at the interface between different fields in applied mathematics and computational engineering. During his PhD, he developed numerical methods for uncertainty quantification, including multilevel Monte Carlo methods, finite element methods for the solution of partial differential equations (PDEs) with random coefficients, and stochastic modelling techniques using Gaussian fields. He applied these techniques to design, validate, and solve different models for brain solute movement. He also developed an optimization method for finding multiple solutions of semismooth equations, variational inequalities, and constrained optimization problems. In his years as a postdoc, Matteo has become an expert in reduced- and mixed-precision (RP and MP) computing, in particular in the development of RP/MP methods for the numerical solution of PDEs, including RP finite difference and finite element methods, and MP time stepping methods.

Matteo won the Charles Broyden Prize for the best paper published in Optimization Methods and Software in 2020.

Simplicity Bias in Deep Learning

Prateek Jain (Google Inc.)

While deep neural networks have achieved large gains in performance on benchmark datasets, their performance often degrades drastically with changes in data distribution encountered during real-world deployment. In this work, through systematic experiments and theoretical analysis, we attempt to understand the key reasons behind such brittleness of neural networks in real-world settings.

More concretely, we demonstrate through empirical and theoretical studies that (i) neural network training exhibits "simplicity bias" (SB), where the models learn only the simplest discriminative features, and (ii) SB is one of the key reasons behind the non-robustness of neural networks. We will then briefly outline some of our (unsuccessful) attempts so far at fixing SB in neural networks, illustrating why this is an exciting but challenging problem.
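To make the notion of simplicity bias concrete, here is a toy illustration of my own construction (not the experiments from the talk): a linear model trained by gradient descent on data where two features are both predictive puts almost all of its weight on the "simpler" (larger-margin) feature, and then fails badly when that feature shifts at deployment, even though the other feature still carries the label.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = rng.choice([-1.0, 1.0], size=n)
x_simple = y + 0.1 * rng.standard_normal(n)      # strong, "simple" feature
x_weak = 0.2 * y + 0.1 * rng.standard_normal(n)  # weaker but still predictive
X = np.stack([x_simple, x_weak], axis=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Plain gradient descent on the logistic loss.
w = np.zeros(2)
for _ in range(500):
    grad = -((y * sigmoid(-y * (X @ w))) @ X) / n
    w -= 0.5 * grad

train_acc = np.mean(np.sign(X @ w) == y)

# Distribution shift: the "simple" feature flips sign at deployment.
X_shift = X.copy()
X_shift[:, 0] *= -1.0
shift_acc = np.mean(np.sign(X_shift @ w) == y)
```

Even though `x_weak` alone would still classify most shifted points correctly, the learned model collapses under the shift because it leaned almost entirely on `x_simple`.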

View recording

The Back-And-Forth Method For Wasserstein Gradient Flows

Data Science Seminar

Wonjun Lee (University of Minnesota, Twin Cities)


We present a method to efficiently compute Wasserstein gradient flows. Our approach is based on a generalization of the back-and-forth method (BFM) introduced by Jacobs and Leger to solve optimal transport problems. We evolve the gradient flow by solving the dual problem to the JKO scheme. In general, the dual problem is much better behaved than the primal problem. This allows us to efficiently run large-scale gradient flow simulations for a large class of internal energies, including singular and non-convex energies.
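For reference, the JKO (Jordan-Kinderlehrer-Otto) scheme mentioned above is the standard time discretization of a Wasserstein gradient flow; each step solves a minimization problem (the talk's method works with the dual of this problem rather than the primal form shown here):

```latex
% One step of the JKO scheme with step size \tau for the
% Wasserstein-2 gradient flow of an internal energy E:
\rho^{k+1} \;\in\; \operatorname*{arg\,min}_{\rho}\;
  E(\rho) \;+\; \frac{1}{2\tau}\, W_2^2\!\left(\rho, \rho^{k}\right)
```

As the step size \tau tends to zero, the discrete sequence converges to the continuous gradient flow of E with respect to the Wasserstein-2 metric.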

Joint work with Matt Jacobs (Purdue University) and Flavien Leger (INRIA Paris).

View recording

Speculations

Gunnar Carlsson (Stanford University)

Slides

I would like to talk about the interaction of traditional algebraic topology and homotopy theory with applied topology, and to describe some specific opportunities for better integration of "higher tech" techniques into applications.

Approximations to Classifying Spaces from Algebras

Ben Williams (University of British Columbia)

If A is a finite-dimensional algebra with automorphism group G, then varieties of generating r-tuples of elements in A, considered up to G-action, produce a sequence of varieties B(r) approximating the classifying space BG. I will explain how this construction generalizes certain well-known examples such as Grassmannians and configuration spaces. Then I will discuss the spaces B(r), and how their topology can be used to produce examples of algebras of various kinds requiring many generators. This talk is based on joint work with Uriya First and Zinovy Reichstein.

Gromov-Hausdorff distances, Borsuk-Ulam theorems, and Vietoris-Rips Complexes

Henry Adams (Colorado State University)

Slides

The Gromov-Hausdorff distance between two metric spaces is an important tool in geometry, but it is difficult to compute. For example, the Gromov-Hausdorff distance between unit spheres of different dimensions is unknown in nearly all cases. I will introduce recent work by Lim, Mémoli, and Smith that finds the exact Gromov-Hausdorff distances between S^1, S^2, and S^3, and that lower bounds the Gromov-Hausdorff distance between any two spheres using Borsuk-Ulam theorems. We improve some of these lower bounds by connecting this story to Vietoris-Rips complexes, providing new generalizations of the Borsuk-Ulam theorem. This is joint work in a polymath-style project with many people, most of whom are currently or formerly at Colorado State, Ohio State, Carnegie Mellon, or Freie Universität Berlin.
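For reference, the Gromov-Hausdorff distance admits the following standard reformulation in terms of correspondences R between X and Y (subsets of X x Y projecting onto all of X and all of Y), which is the distortion form used in the work of Lim, Mémoli, and Smith cited above:

```latex
% Gromov--Hausdorff distance as half the infimal distortion
% over all correspondences R \subseteq X \times Y:
d_{\mathrm{GH}}(X, Y) \;=\; \frac{1}{2}\,
  \inf_{R}\; \sup_{(x, y),\, (x', y') \in R}
  \bigl|\, d_X(x, x') - d_Y(y, y') \,\bigr|
```

The difficulty of the infimum over all correspondences is what makes exact values, even for round spheres, so hard to pin down.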

Equivariant methods in chromatic homotopy theory

XiaoLin (Danny) Shi (University of Chicago)

Slides

I will talk about equivariant homotopy theory and its role in the proof of the Segal conjecture and the Kervaire invariant one problem. Then, I will talk about chromatic homotopy theory and its role in studying the stable homotopy groups of spheres. These newly established techniques allow one to use equivariant machinery to attack chromatic computations that were long considered unapproachable.

Vector bundles for data alignment and dimensionality reduction

Jose Perea (Northeastern University)

Slides

A vector bundle can be thought of as a family of vector spaces parametrized by a fixed topological space. Vector bundles have rich structure, and arise naturally when trying to solve synchronization problems in data science. I will show in this talk how the classical machinery (e.g., classifying maps, characteristic classes, etc.) can be adapted to the world of algorithms and noisy data, as well as the insights one can gain. In particular, I will describe a class of topology-preserving dimensionality reduction problems, whose solution reduces to embedding the total space of a particular data bundle. Applications to computational chemistry and dynamical systems will also be presented.

Persistent homology and its fibre (Remotely)

Ulrike Tillmann (University of Oxford)

Persistent homology is a main tool in topological data analysis. So it is natural to ask how strong this quantifier is and how much information is lost. There are many ways to ask this question. Here we will concentrate on the case of level set filtrations on simplicial sets. Already the example of a triangle yields a rich structure with the Möbius band showing up as one of the fibres. Our analysis forces us to look at the persistence map with fresh eyes.

The talk will be based on joint work with Jacob Leygonie.