Seminar | Institute of Mathematical Sciences
Time: Friday, October 14, 2022, 15:30-16:45
Location: Online, Tencent Meeting
Speaker: Jing Li, Zhejiang Lab
Abstract: We propose a new machine learning framework for forward uncertainty quantification (UQ) and parameter estimation in partial differential equation models using sparse measurements of the parameter field. In our approach, Gaussian process regression is used to estimate the distribution of the unknown parameter κ, including its mean and variance, conditioned on its measurements. In one approach, the conditional Karhunen-Loève (KL) expansion of κ and the generalized polynomial chaos (gPC) expansion of the state variable u are constructed in terms of the parameter's conditional mean and the eigenfunctions and eigenvalues of the parameter's conditional covariance function. In the forward UQ application, the conditional gPC surrogate is used to estimate the mean and variance of the state. In the inverse solution, we use the conditional KL and gPC expansions to find a realization of the conditional κ distribution that satisfies an appropriate maximum a posteriori (MAP) minimization problem. In another approach (PI-CKL-NN), the states u are approximated by deep neural networks (DNNs). The unknown weights in the KL expansions and DNNs are found by minimizing a cost function that enforces the measurements of the states and the differential equation constraint. Regularization is achieved by adding the ℓ2 norm of the conditional KL coefficients to the loss function.
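For orientation, the following is a minimal, illustrative Python sketch (not the speaker's code) of the conditional construction described in the abstract: Gaussian process regression conditions the mean and covariance of κ on sparse measurements, and an eigendecomposition of the conditional covariance yields the conditional KL modes. The kernel choice, grid, measurement locations, and measurement values below are hypothetical placeholders.

```python
# Sketch of the conditional KL construction (illustrative assumptions throughout):
# 1) GP regression gives the mean and covariance of kappa conditioned on sparse data,
# 2) the eigendecomposition of the conditional covariance gives the conditional KL modes.
import numpy as np

def se_kernel(xa, xb, variance=1.0, length=0.2):
    """Squared-exponential covariance between 1D point sets xa and xb (assumed kernel)."""
    d = xa[:, None] - xb[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

# Grid on which the conditional KL expansion of kappa is built (assumed 1D domain [0, 1])
x = np.linspace(0.0, 1.0, 200)

# Hypothetical sparse measurements of the parameter field
x_obs = np.array([0.1, 0.45, 0.8])
y_obs = np.array([0.3, -0.2, 0.5])
noise = 1e-4

# GP conditioning: mean and covariance of kappa given the measurements
K_oo = se_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
K_xo = se_kernel(x, x_obs)
K_xx = se_kernel(x, x)
cond_mean = K_xo @ np.linalg.solve(K_oo, y_obs)
cond_cov = K_xx - K_xo @ np.linalg.solve(K_oo, K_xo.T)

# Conditional KL expansion: eigenpairs of the conditional covariance, largest first
eigvals, eigvecs = np.linalg.eigh(cond_cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
n_kl = 10  # number of retained KL modes

def sample_kappa(xi):
    """Conditional realization of the field for KL coefficients xi (length n_kl)."""
    modes = eigvecs[:, :n_kl] * np.sqrt(np.maximum(eigvals[:n_kl], 0.0))
    return cond_mean + modes @ xi

# Example: one conditional realization with standard-normal KL coefficients
kappa_sample = sample_kappa(np.random.standard_normal(n_kl))
```

In the framework described above, realizations of this form feed both the conditional gPC surrogate and, in PI-CKL-NN, the physics-informed loss in which the KL coefficients and DNN weights are trained jointly.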
Tencent Meeting Room Number: 355-741-802