On small and large scales in training physics-informed neural networks for partial differential equations

Data Science Seminar

Zhongqiang Zhang (Worcester Polytechnic Institute)

Abstract

Training in physics-informed machine learning is often carried out with first-order optimization methods, such as stochastic gradient descent. Such training methods tend to learn solutions with a narrow range of scales well; additional treatment is required for problems whose solutions span multiple scales. In this talk, we consider two classes of problems. The first is high-dimensional Fokker-Planck equations, whose solutions are small in magnitude yet not negligible in some regions. We use tensor neural networks and show how to handle solutions that are small in scale but have large gradients. The second is low-dimensional partial differential equations with small parameters, which induce sharp features such as boundary layers. We discuss a two-scale neural network method for the large-gradient issues caused by these small parameters.
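
For context on the first sentence, here is a minimal sketch of plain physics-informed training, assuming PyTorch: the PDE residual is sampled at random collocation points and driven toward zero with a first-order optimizer (Adam here). The model problem -ε u''(x) + u'(x) = 1 on (0, 1) with u(0) = u(1) = 0, the network size, and all hyperparameters are illustrative assumptions, not the speaker's method.

```python
# Illustrative sketch only: vanilla PINN training for the 1D boundary-layer
# model problem  -eps * u''(x) + u'(x) = 1  on (0, 1),  u(0) = u(1) = 0.
# Problem, architecture, and hyperparameters are assumptions for illustration,
# not taken from the talk.
import torch

torch.manual_seed(0)
eps = 0.1  # small parameter; the layer near x = 1 sharpens as eps -> 0

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def residual(x):
    """PDE residual -eps*u'' + u' - 1 via automatic differentiation."""
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return -eps * d2u + du - 1.0

xb = torch.tensor([[0.0], [1.0]])  # boundary points where u must vanish
for step in range(5000):
    x = torch.rand(256, 1)  # random interior collocation points
    loss = residual(x).pow(2).mean() + net(xb).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

As ε decreases, the solution develops a steep gradient near x = 1 and this plain residual loss becomes hard to minimize with first-order methods; the two-scale treatment discussed in the talk targets exactly this regime.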

Date
Tuesday, March 26, 2024, 1:25–2:25 p.m.
Location
Lind Hall 325 or via Zoom
