Institute of Mathematical Sciences

Seminar: Randomized Neural Network Methods for Partial Differential Equations


Time: Wednesday, November 26, 2025, 15:00–16:00

Location: RS408, IMS

Speaker: Fei Wang, Xi’an Jiaotong University


Abstract: Traditional numerical methods, characterized by rigorous mathematical theory, high precision, and physical conservation properties, form a reliable foundation for modern scientific computing. Despite their significant achievements, these methods still face inherent limitations: difficulty of mesh generation for complex geometries, limited ability to represent global solution structures, the need to rebuild the discrete system case by case for each new geometry or boundary condition, the curse of dimensionality with its sharply rising computational cost, and shortcomings in data fusion and uncertainty quantification. In recent years, novel computational paradigms based on neural networks have emerged, offering the potential to overcome these bottlenecks through their powerful representational capabilities. However, conventional training-based neural network methods are constrained by nonlinear, non-convex optimization, which limits their accuracy and efficiency.



To address these issues, we propose a class of randomized neural network (RaNN) methods that combine the rigor of traditional numerical schemes with the flexibility of neural networks. This framework encompasses RaNN–Petrov–Galerkin (RaNN–PG), Local Randomized Neural Network–Discontinuous Galerkin (LRaNN–DG), LRaNN–HDPG, and LRaNN–finite difference methods. Additionally, we introduce an Adaptive Growing Randomized Neural Network (AG–RaNN) strategy, which leverages prior and posterior information to capture key features of the solution, adaptively determines the distribution of the random parameters, and dynamically adjusts the network structure, thereby significantly enhancing approximation capability. Furthermore, we explore the role of RaNN in accelerating operator learning, making the training of parameterized partial differential equations more efficient. Numerical results demonstrate that RaNN methods are mesh-free, structure-preserving, and flexible in approximation, achieve high-precision solutions with fewer degrees of freedom, and extend naturally to high-dimensional and time-dependent problems. This research indicates that RaNN provides a highly promising approach for integrating traditional numerical methods with modern machine learning to solve partial differential equations efficiently.
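The core idea behind randomized neural network methods can be illustrated with a minimal sketch (not the speaker's implementation; all function names and parameter choices here are illustrative): hidden-layer weights and biases are drawn at random and frozen, so only the linear output weights remain unknown, and enforcing the PDE at collocation points reduces training to a single linear least-squares solve rather than non-convex optimization.

```python
# Minimal RaNN collocation sketch for -u''(x) = f(x) on [0,1],
# u(0) = u(1) = 0, with manufactured solution u(x) = sin(pi x).
# Hidden parameters (w, b) are random and frozen; only the output
# weights c are fit, via one linear least-squares solve.
import numpy as np

rng = np.random.default_rng(0)
m = 100                                 # number of random hidden neurons
w = rng.uniform(-5.0, 5.0, m)           # frozen hidden weights
b = rng.uniform(-5.0, 5.0, m)           # frozen hidden biases

def phi(x):
    # Hidden-layer features tanh(w*x + b); rows = points, cols = neurons.
    return np.tanh(np.outer(x, w) + b)

def phi_xx(x):
    # Second x-derivative of each feature: d^2/dx^2 tanh(w*x + b)
    # = -2 t (1 - t^2) w^2  with  t = tanh(w*x + b).
    t = np.tanh(np.outer(x, w) + b)
    return -2.0 * t * (1.0 - t**2) * w**2

x_in = np.linspace(0.0, 1.0, 201)       # interior collocation points
f = np.pi**2 * np.sin(np.pi * x_in)     # source term matching sin(pi x)

# Stack PDE-residual rows and (weighted) boundary-condition rows,
# then solve for the output weights in the least-squares sense.
A = np.vstack([-phi_xx(x_in), 100.0 * phi(np.array([0.0, 1.0]))])
rhs = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

x_test = np.linspace(0.0, 1.0, 50)
u_hat = phi(x_test) @ c
err = np.max(np.abs(u_hat - np.sin(np.pi * x_test)))
```

The single `lstsq` call replacing gradient-based training is what makes the approach fast and mesh-free; the local (LRaNN) and adaptive (AG–RaNN) variants discussed in the talk refine how the random features are placed and distributed.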


Address: 393 Middle Huaxia Road, Pudong New Area, Shanghai
Postal code: 201210
Building 8, 319 Yueyang Road, Xuhui District, Shanghai
200031 (Yueyang Road Campus)