Conference Agenda

| Date | Time | Item | Speaker |
| --- | --- | --- | --- |
| 4.19 | | Arrival of participating experts | |
| 4.20 | 8:30-9:00 | Welcome remarks and group photo | |
| | 9:00-9:30 | Natural model reduction of kinetic equations | 李若 |
| | 9:30-10:00 | Reduced order modelling and its applications | 肖敦辉 |
| | 10:00-10:30 | SAV-based optimization methods for the training in deep learning | 毛志平 |
| | 10:30-10:50 | Tea break | |
| | 10:50-11:20 | Overlapping Multiplicative Schwarz Preconditioning for Linear and Nonlinear Systems | 高卫国 |
| | 11:20-11:50 | PINNs based reduced basis method | 李敬来 |
| | 11:50-12:20 | An adaptive phase-field method for structural topology optimization | 朱升峰 |
| | | Lunch | |
| 4.20 | 14:00-14:30 | Thermodynamically consistent deep model reductions via OnsagerNet for deterministic and stochastic systems | 于海军 |
| | 14:30-15:00 | DeepRTE: Pre-trained Attention-based Neural Network for Radiative Transfer | 马征 |
| | 15:00-15:30 | An efficient and statistically accurate Lagrangian data assimilation algorithm with applications to discrete element sea ice models | 付书彬 |
| | 15:30-16:00 | Tea break | |
| | 16:00-16:30 | A deep learning method for the Schrödinger eigenvalue problems | 明平兵 |
| | 16:30-17:00 | Deep adaptive sampling for surrogate modeling | 唐科军 |
Venue: Lecture Hall 408, 上海科技大学数学科学研究所
Organizing committee: 蒋诗晓, 廖奇峰, 明平兵, 翟佳羽
Title: An efficient and statistically accurate Lagrangian data assimilation algorithm with applications to discrete element sea ice models
Speaker:付书彬,宁波东方理工大学(暂名)
Abstract: In this talk, an efficient, statistically accurate, data-driven reduced-order modeling algorithm is developed that significantly improves the computational efficiency of Lagrangian data assimilation. The algorithm starts with a Fourier transform of the high-dimensional flow field, which is followed by an effective model reduction that retains only a small subset of the Fourier coefficients corresponding to the energetic modes. Then a linear stochastic model is developed to approximate the nonlinear dynamics of each Fourier coefficient. Effective additive and multiplicative noise processes are incorporated to characterize the modes that exhibit Gaussian and non-Gaussian statistics, respectively. All the parameters in the reduced-order system, including the multiplicative noise coefficients, are determined systematically via closed analytic formulae. These linear stochastic models succeed in forecasting the uncertainty and facilitate an extremely rapid data assimilation scheme.
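
To give a flavour of the kind of linear stochastic surrogate described above, the following numpy sketch fits a complex Ornstein-Uhlenbeck model to the time series of a single Fourier coefficient. It covers only the additive-noise (Gaussian) case, and the moment-matching estimates and function names are ours, not the speaker's closed analytic formulae.

```python
import numpy as np

def fit_complex_ou(z, dt):
    """Moment-matching fit of dz = (-gamma + i*omega)(z - m) dt + sigma dW
    to the complex time series of one Fourier coefficient (illustrative only)."""
    m = z.mean()
    zc = z - m
    # lag-1 autocorrelation of the centred series ~ exp((-gamma + i*omega) * dt)
    r1 = (zc[1:] * np.conj(zc[:-1])).mean() / (np.abs(zc) ** 2).mean()
    lam = np.log(r1) / dt
    gamma, omega = -lam.real, lam.imag
    var = (np.abs(zc) ** 2).mean()          # stationary variance = sigma^2 / (2*gamma)
    sigma = np.sqrt(2.0 * gamma * var)
    return m, gamma, omega, sigma

# sanity check on synthetic data generated from a known complex OU process
rng = np.random.default_rng(0)
dt, n = 0.01, 100_000
z = np.zeros(n, dtype=complex)
for k in range(n - 1):
    dW = (rng.normal() + 1j * rng.normal()) * np.sqrt(dt / 2.0)   # E|dW|^2 = dt
    z[k + 1] = z[k] + (-0.5 + 2.0j) * (z[k] - 1.0) * dt + 0.3 * dW
print(fit_complex_ou(z, dt))   # expected roughly (1.0, 0.5, 2.0, 0.3)
```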
Title: Overlapping Multiplicative Schwarz Preconditioning for Linear and Nonlinear Systems
Speaker:高卫国,复旦大学
Abstract: TBA
Title: PINNs based reduced basis method
Speaker:李敬来,伯明翰大学
Abstract: The reduced basis method is a popular approach for accelerating the numerical solution of parametric PDEs. In the reduced basis method, a key step is to solve a reduced problem, which can be rather computationally intensive. In this work we propose a Physics-Informed Neural Networks (PINNs) based method for solving the reduced problem. Compared to the standard PINNs approach, the underlying network of our method is considerably simpler. Numerical examples are provided to demonstrate the performance of the proposed method.
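
The abstract does not spell out the formulation; purely as an orientation sketch (the ansatz and residual-type loss below are our reading, not necessarily the speaker's), one may think of a network that maps the parameter to the reduced coefficients over a precomputed basis and is trained on the PDE residual:

```latex
% Sketch only: reduced-basis ansatz with network-predicted coefficients
% and a residual-type (physics-informed) training loss.
\[
u_N(x;\mu) = \sum_{i=1}^{N} c_i(\mu)\,\zeta_i(x), \qquad
c(\mu) = \mathcal{N}_\theta(\mu), \qquad
\min_\theta \; \mathbb{E}_{\mu}\,\bigl\| \mathcal{A}(\mu)\,u_N(\cdot;\mu) - f(\mu) \bigr\|^2 .
\]
```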
Title: Natural model reduction of kinetic equations
Speaker:李若,北京大学
Abstract: Handling high-dimensional problems is a demand arising from real applications, and kinetic equations are a typical and classical class of high-dimensional problems. Taking kinetic equations as a case study, we adopt a new formulation under which the problem, although high-dimensional in appearance, is essentially low-dimensional owing to the low-dimensional structure of its solution manifold. Starting from this point, we can perform a natural model reduction of the kinetic equation on the manifold to obtain low-dimensional approximate models, and we can state the rules that must be followed to preserve the desirable properties of the original equation. We attempt to apply this abstract theory to the model reduction of radiative transfer problems in inertial confinement fusion.
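
As a generic schematic of model reduction on a solution manifold (not the speaker's specific construction), one constrains the kinetic density to a parameterized ansatz and projects the dynamics onto its tangent space:

```latex
% Generic schematic: kinetic equation, ansatz manifold, projected reduced dynamics.
\[
\partial_t f + v\cdot\nabla_x f = \mathcal{Q}(f), \qquad
f(t,x,v) \approx \hat f\bigl(v;\, w(t,x)\bigr), \quad w \in \mathbb{R}^M,
\]
\[
\sum_{j=1}^{M} \partial_{w_j}\hat f \;\partial_t w_j
  + \mathcal{P}_w\!\bigl[v\cdot\nabla_x \hat f\bigr]
  = \mathcal{P}_w\,\mathcal{Q}(\hat f),
\]
```

where P_w denotes a projection onto the tangent space spanned by the ∂_{w_j} f̂; preserving properties of the original equation (conservation, dissipation) constrains how the ansatz and the projection may be chosen.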
Title: DeepRTE: Pre-trained Attention-based Neural Network for Radiative Transfer
Speaker:马征, 上海交通大学
Abstract: In this work we propose a novel neural network approach to solve the steady Radiative Transfer Equation. The Radiative Transfer Equation is an integro-differential equation that describes the transport of radiation in a medium. It has applications in various fields such as neutron transport, atmospheric radiative transfer, heat transfer, and optical imaging. The proposed DeepRTE approach is based on pre-trained attention-based neural networks and is capable of solving the Radiative Transfer Equation with high accuracy and efficiency. The effectiveness of the proposed approach is demonstrated through numerical experiments.
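
For reference, the steady radiative transfer equation has the standard integro-differential form (notation ours, not taken from the talk):

```latex
% Steady radiative transfer equation (standard form).
\[
\Omega\cdot\nabla_x u(x,\Omega) + \sigma_t(x)\,u(x,\Omega)
  = \sigma_s(x)\int_{\mathbb{S}^{d-1}} p(\Omega\cdot\Omega')\,u(x,\Omega')\,\mathrm{d}\Omega'
  + q(x,\Omega),
\]
```

with u the specific intensity, σ_t and σ_s the total and scattering coefficients, p the scattering phase function, q the source term, and prescribed data on the inflow boundary.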
Title: A deep learning method for the Schrödinger eigenvalue problems
Speaker: 明平兵, 中国科学院数学与系统科学研究院
Abstract: We present a novel deep learning method for computing eigenvalues of the Schrödinger operator. The proposed approach combines a newly developed loss function with an innovative neural network architecture that incorporates prior knowledge of the problem. These improvements enable the proposed method to handle both high-dimensional problems and problems posed on irregular bounded domains. We successfully compute up to the first 30 eigenvalues for various Schrödinger operators. We also analyze the generalization error in the framework of Barron space. This is joint work with Yixiao Guo and Hao Yu.
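
For orientation, the model problem and the classical variational characterization of the ground state read as follows (the talk's newly developed loss function may differ from this Rayleigh-quotient form):

```latex
% Model eigenvalue problem and the classical Rayleigh-quotient characterization.
\[
-\Delta u + V(x)\,u = \lambda u \ \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega,
\qquad
\lambda_1 = \min_{u \neq 0}
  \frac{\int_\Omega \bigl(|\nabla u|^2 + V\,u^2\bigr)\,\mathrm{d}x}{\int_\Omega u^2\,\mathrm{d}x}.
\]
```

Higher eigenvalues are obtained by minimizing the same quotient over functions orthogonal to the previously computed eigenfunctions.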
Title: SAV-based optimization methods for the training in deep learning
Speaker:毛志平,宁波东方理工大学(暂名)
Abstract: The optimization algorithm plays an important role in deep learning; it significantly affects the stability and efficiency of the training process and, consequently, the accuracy of the neural network approximation. A suitable (initial) learning rate is crucial for the optimization algorithm in deep learning; however, a small learning rate is usually needed to guarantee convergence, resulting in a slow training process. In this work we develop efficient and energy-stable SAV-based optimization methods for the training in deep learning. In particular, we consider the gradient flows arising from deep learning and develop several SAV-based optimization methods, including vanilla SAV, restart SAV, relax SAV, and elementwise SAV. We also combine them with the adaptive strategy used in the Adam algorithm to improve the accuracy. To illustrate the effectiveness of the proposed methods, we present a number of numerical tests demonstrating that the SAV-based schemes significantly improve the efficiency and stability of the training as well as the accuracy of the neural network approximation.
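
To convey the flavour of an SAV-type optimizer, here is a minimal numpy sketch of a first-order "vanilla" SAV discretization of the gradient flow θ' = -∇L(θ) applied to a toy quadratic loss; it is an illustration under our own notation, not the exact schemes of the talk.

```python
import numpy as np

def sav_gradient_descent(loss, grad, theta0, lr=0.05, C=1.0, steps=200):
    """Vanilla first-order SAV scheme for the gradient flow theta' = -grad L(theta).

    The scalar auxiliary variable r tracks sqrt(L(theta) + C); the modified energy
    r^2 decreases at every step regardless of the learning rate. The gradual decay
    of r (which slows late-stage training) is what motivates restart/relax variants.
    """
    theta = np.asarray(theta0, dtype=float)
    r = np.sqrt(loss(theta) + C)
    for _ in range(steps):
        g = grad(theta)
        denom = loss(theta) + C
        # semi-implicit update of r, available in closed form
        r = r / (1.0 + lr * np.dot(g, g) / (2.0 * denom))
        theta = theta - lr * r * g / np.sqrt(denom)
    return theta

# toy example: L(theta) = 0.5 * theta^T A theta with stiff curvature
A = np.diag([1.0, 10.0, 100.0])
loss = lambda th: 0.5 * th @ A @ th
grad = lambda th: A @ th
print(sav_gradient_descent(loss, grad, theta0=[1.0, 1.0, 1.0]))
```

Note that plain gradient descent with the same learning rate would diverge on this stiff example, while the SAV step stays stable because r shrinks automatically when the gradient is large.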
Title: Deep adaptive sampling for surrogate modeling
Speaker:唐科军, 北京大学长沙计算与数字经济研究院
Abstract: Using deep learning methods to approximate an unknown function often involves computing an integral in the loss function. An effective way to discretize this high-dimensional integral is Monte Carlo sampling, and the discretization accuracy of the loss function is one of the key factors determining the solution accuracy. In this talk, we will show how to use adaptive sampling methods for solving high-dimensional PDEs and low-regularity parametric PDEs. In particular, a deep generative model is used to design the adaptive sampling framework. The samples from the deep generative model are consistent with the distribution induced by an error indicator function. Analogous to classical adaptive methods such as the adaptive finite element method, the deep generative model acts as an error indicator that guides the refinement of the training set. Compared to the neural network approximation obtained with non-adaptive methods, the deep adaptive sampling algorithms can significantly improve the accuracy, especially for low-regularity and high-dimensional problems.
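
The sketch below shows only the outer adaptive loop: new training points are drawn with probability proportional to an error indicator. The talk uses a deep generative model to produce such samples; here it is replaced by simple resampling from a uniform candidate pool, and all names are ours.

```python
import numpy as np

def adaptive_refine(indicator, train_x, n_new=256, n_candidates=10_000, dim=2, rng=None):
    """One round of error-indicator-driven refinement of the training set.

    `indicator(x)` returns a nonnegative error indicator (e.g. |PDE residual|)
    per point. New points are resampled from a uniform candidate pool with
    probability proportional to the indicator -- a stand-in for the deep
    generative sampler used in the actual method.
    """
    if rng is None:
        rng = np.random.default_rng()
    cand = rng.uniform(-1.0, 1.0, size=(n_candidates, dim))
    w = indicator(cand)
    idx = rng.choice(n_candidates, size=n_new, replace=True, p=w / w.sum())
    return np.vstack([train_x, cand[idx]])

# usage with a toy indicator peaked near a low-regularity point at the origin
indicator = lambda x: 1.0 / (1e-2 + np.sum(x**2, axis=1))
x_train = np.random.default_rng(0).uniform(-1, 1, size=(512, 2))
for _ in range(5):          # outer loop: (re)train the surrogate, then refine
    # ... train the surrogate on x_train here ...
    x_train = adaptive_refine(indicator, x_train, rng=np.random.default_rng(1))
print(x_train.shape)        # (1792, 2); new points cluster near the origin
```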
Title: Reduced order modelling and its applications
Speaker:肖敦辉,同济大学
Abstract: Reduced-order modelling (ROM) provides an economical way to construct low-dimensional parametric surrogates for rapid predictions of high-dimensional physical fields. This talk will present a physics-data combined machine learning (PDCML) method for ROM in small-data regimes. To overcome the scarcity of labelled data, a physics-data combined ROM framework is developed that jointly integrates the physical principle and the small labelled data set into feedforward neural networks (FNN) via a step-by-step training scheme. The new PDCML method is tested on a series of nonlinear problems with different numbers of physical variables, and it is also compared with data-driven ROM and physics-guided ROM. The results demonstrate that the proposed method provides a cost-effective way to build parametric ROMs via machine learning, with high prediction accuracy, strong generalization capability, and small data requirements. A nonlinear non-intrusive ROM based on an autoencoder and self-attention will also be presented.
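
Schematically, the physics-data combined objective described above can be written as follows; the notation (reduced basis Φ, FNN coefficients c_θ, residual R, weight λ) is ours, and the step-by-step training scheme mentioned in the abstract is not reflected in this single objective.

```latex
% Schematic physics-data combined objective for an FNN surrogate on a reduced basis
% (notation ours; the step-by-step training strategy is not shown).
\[
\min_\theta \;
\frac{1}{|D_\ell|}\sum_{\mu \in D_\ell}
  \bigl\| \Phi\,c_\theta(\mu) - u_h(\mu) \bigr\|^2
\;+\;
\lambda \,
\frac{1}{|D_u|}\sum_{\mu \in D_u}
  \bigl\| R\bigl(\Phi\,c_\theta(\mu);\,\mu\bigr) \bigr\|^2 ,
\]
```

where D_ℓ is the small labelled set with high-fidelity snapshots u_h(μ), D_u a larger unlabelled parameter set, and R the PDE residual enforcing the physical principle.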
Title: Thermodynamically consistent deep model reductions via OnsagerNet for deterministic and stochastic systems
Speaker: 于海军, 中国科学院数学与系统科学研究院
Abstract: Discovering hidden low-dimensional dynamical models from trajectory data, or building reduced surrogate models for given high-dimensional PDE systems using deep learning methods, is one of the promising topics in computational science and machine learning. Broadly speaking, there are two fundamental approaches: unstructured and structured. In the first approach, deep neural networks are used directly to fit the dynamical data, while in the second, deep networks with a special physical structure are used to fit the physical data. In this talk, I will briefly introduce our recent attempts, which belong to the second approach, to find low-dimensional models by combining deep learning methods with a generalized Onsager principle. The resulting reduced models are mathematically well-posed and physically interpretable. The method is applied to both deterministic and stochastic dynamical systems. Detailed numerical results for different applications will be presented.
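
The structured (second) approach constrains the learned reduced dynamics to a generalized Onsager form; schematically (the exact parameterization used in OnsagerNet may differ in detail):

```latex
% Generalized Onsager structure for the learned reduced state h(t):
% M symmetric positive semi-definite (dissipative), W antisymmetric (conservative).
\[
\frac{\mathrm{d}h}{\mathrm{d}t} = -\bigl(M(h) + W(h)\bigr)\,\nabla V(h) + g(h),
\qquad
\frac{\mathrm{d}}{\mathrm{d}t} V\bigl(h(t)\bigr)
  = -\,\nabla V^{\!\top} M\,\nabla V + \nabla V^{\!\top} g ,
\]
```

so that in the absence of external forcing g the learned potential V decreases along trajectories; for stochastic systems a noise term consistent with the dissipation can be added.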
Title: An adaptive phase-field method for structural topology optimization
Speaker:朱升峰, 华东师范大学
Abstract: We develop an adaptive algorithm for the efficient numerical solution of the minimum compliance problem in topology optimization. The algorithm employs a phase-field approximation with a continuous density field. The adaptive procedure is driven by two residual-type a posteriori error estimators, one for the state variable and the other for the first-order optimality condition of the objective. The adaptive algorithm is provably convergent. Several numerical simulations are provided to show the distinct features of the algorithm.
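
For context, a standard phase-field formulation of the minimum compliance problem reads as follows (the precise formulation and the error estimators in the talk may differ):

```latex
% Standard phase-field formulation of minimum compliance (for context only):
% compliance plus a Ginzburg--Landau perimeter penalty, under the elasticity
% state equation and a volume constraint.
\[
\min_{\varphi}\; J(\varphi) = \int_\Omega f\cdot u \,\mathrm{d}x
  + \gamma \int_\Omega \Bigl(\tfrac{\epsilon}{2}\,|\nabla\varphi|^2
  + \tfrac{1}{\epsilon}\,W(\varphi)\Bigr)\mathrm{d}x
\quad\text{s.t.}\quad
-\,\mathrm{div}\,\sigma(\varphi; u) = f \ \text{in } \Omega, \qquad
\int_\Omega \varphi\,\mathrm{d}x = V_0 ,
\]
```

where φ is the continuous density/phase field, W a double-well potential keeping φ near 0 and 1, σ(φ;u) the density-interpolated elastic stress, and V_0 the prescribed material volume.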