On efficient, approximate sampling for high-dimensional scientific computing

Data Science Seminar

Yifan Chen
New York University

Abstract 

Estimating high-dimensional probability distributions and sampling from them is a fundamental yet challenging task in scientific computing and machine learning. Approximate, biased algorithms empirically appear to be more efficient and scalable than unbiased, exact methods. This talk presents novel theoretical results showing that the biased unadjusted Langevin algorithm can achieve nearly dimension-independent complexity when the quantity of interest depends only on low-dimensional marginals, in contrast to unbiased Metropolis-adjusted schemes. We also explore other dynamics of probability distributions and their biased numerical approximations, especially for sampling Bayesian posteriors in inverse problems. These approaches leverage gradient flow structures, variational inference, Kalman methodology, and recent advances in generative modeling, all supported by theoretical foundations. The efficacy of these approximate sampling methods for probabilistic inference is demonstrated through several large-scale scientific applications.
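For readers unfamiliar with the unadjusted Langevin algorithm mentioned in the abstract, the sketch below is a minimal illustration, not the speaker's implementation; the Gaussian target, step size, and iteration count are illustrative assumptions. It shows the biased discretization x_{k+1} = x_k + h * grad log pi(x_k) + sqrt(2h) * N(0, I) and uses it to estimate a one-dimensional marginal mean, the kind of low-dimensional quantity of interest the abstract discusses.

```python
import numpy as np

def ula_sample(grad_log_pi, x0, step_size, n_steps, rng=None):
    """Unadjusted Langevin algorithm (ULA): Euler-Maruyama discretization
    of the Langevin diffusion targeting pi(x) proportional to exp(log_pi(x)).
    Biased (no Metropolis accept/reject step), but cheap per iteration."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        noise = rng.standard_normal(x.size)
        x = x + step_size * grad_log_pi(x) + np.sqrt(2.0 * step_size) * noise
        samples[k] = x
    return samples

# Illustrative example (not from the talk): standard Gaussian target in
# d = 100 dimensions; estimate the mean of the first coordinate's marginal.
d = 100
samples = ula_sample(grad_log_pi=lambda x: -x,  # grad log of N(0, I)
                     x0=np.zeros(d), step_size=0.1, n_steps=5000)
print(samples[:, 0].mean())  # near 0, up to discretization bias + MC error
```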

Start date
Tuesday, Oct. 8, 2024, 1:25 p.m.
End date
Tuesday, Oct. 8, 2024, 2:25 p.m.
Location
Lind Hall 325 or via Zoom (registration required for Zoom attendance)
