Generative Machine Learning Models for Uncertainty Quantification
Data Science Seminar
Guannan Zhang (Oak Ridge National Laboratory (ORNL))
Abstract
Generative machine learning models, including variational auto-encoders (VAEs), normalizing flows (NFs), generative adversarial networks (GANs), and diffusion models, have dramatically improved the quality and realism of generated content, whether images, text, or audio. In science and engineering, generative models serve as powerful tools for probability density estimation and high-dimensional sampling, capabilities that are critical in uncertainty quantification (UQ), e.g., Bayesian inference for parameter estimation. Studies of generative models for image and audio synthesis focus on improving the quality of individual samples, which often makes the models complicated and difficult to train. UQ tasks, by contrast, usually require accurate approximation of statistics of interest rather than high quality in any individual sample, so directly applying existing generative models to UQ tasks may lead to inaccurate approximations or an unstable training process. To address these challenges, we developed several new generative models for various UQ tasks, including diffusion-model-assisted supervised learning of generative models and a score-based nonlinear filter for recursive Bayesian inference. We will demonstrate the effectiveness of these methods on various UQ tasks, including density estimation, learning stochastic dynamical systems, and data assimilation for surface quasi-geostrophic turbulence.
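To make the score-based sampling idea underlying the abstract concrete, the following is a minimal toy sketch, not the speaker's algorithm: it samples a one-dimensional Gaussian-mixture target by integrating the reverse-time diffusion SDE, using the diffused score in closed form (so no neural network is needed). The constant noise schedule BETA, the horizon T, and all parameter values are illustrative assumptions.

# Toy sketch of score-based (diffusion) sampling -- illustrative only,
# NOT the speaker's method. Assumes a 1-D Gaussian-mixture target whose
# diffused score is available analytically.
import numpy as np

rng = np.random.default_rng(0)

# Target density p0: mixture of two Gaussians (hypothetical example).
weights = np.array([0.3, 0.7])
means   = np.array([-2.0, 2.0])
stds    = np.array([0.5, 0.8])

BETA, T = 1.0, 5.0  # constant noise schedule and time horizon (assumed)

def diffused_score(x, t):
    """Exact score d/dx log p_t(x) of the VP-SDE-diffused mixture."""
    a = np.exp(-0.5 * BETA * t)               # signal decay factor
    mu = a * means                            # diffused component means
    var = (a * stds) ** 2 + (1.0 - a ** 2)    # diffused component variances
    d = x[:, None] - mu[None, :]
    logp = -0.5 * d**2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(weights)
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)         # mixture responsibilities
    return (r * (-d / var)).sum(axis=1)

# Reverse-time SDE, integrated by Euler-Maruyama from t = T down to 0:
#   dx = [-(1/2) beta x - beta * score(x, t)] dt + sqrt(beta) dWbar
n_steps, n_samples = 1000, 5000
dt = T / n_steps
x = rng.standard_normal(n_samples)            # start from the N(0, 1) prior
for i in range(n_steps):
    t = T - i * dt
    drift = -0.5 * BETA * x - BETA * diffused_score(x, t)
    x = x - drift * dt + np.sqrt(BETA * dt) * rng.standard_normal(n_samples)

# The generated samples should recover the target statistics,
# which is the notion of accuracy that matters in UQ tasks.
print(f"sample mean {x.mean():+.3f}  (target {weights @ means:+.3f})")

Because the score is exact here, any mismatch between the sample statistics and the target comes only from time discretization, mirroring the abstract's point that UQ cares about accurate statistics rather than the fidelity of any single sample.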