Abstract: The complex nonlinear interplay of multiple scales gives rise to many interesting physical phenomena and poses major difficulties for the computer simulation of multiscale PDE models in areas such as reservoir simulation, high-frequency scattering, and turbulence modeling. Although various neural solvers have been proposed to solve PDEs with fixed parameters, the difficulty of solving multiscale PDEs has already been exhibited by the so-called spectral bias or frequency principle, which shows that DNN-based algorithms often suffer from the ``curse of high frequency'': they are inefficient at learning the high-frequency components of multiscale functions. Furthermore, DNN-based algorithms show greater potential for learning the input-output mapping (solution operator) of parametric PDEs. In this talk, I will first briefly introduce our work on solving (fixed-parameter) multiscale PDEs with multiscale neural networks. Then I will introduce a hierarchical transformer (HT) scheme to efficiently learn the solution operator for multiscale PDEs. We construct a hierarchical architecture with a scale-adaptive interaction range, such that the features can be computed in a nested manner and at a controllable linear cost. Self-attentions over a hierarchy of levels are used to encode and decode the multiscale solution space over all scale ranges. In addition, we adopt an empirical $H^1$ loss function to counteract the spectral bias of the neural network approximation of multiscale functions. In the numerical experiments, we demonstrate the superior performance of the HT scheme compared with state-of-the-art (SOTA) methods on representative multiscale problems.
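To make the scale-adaptive interaction range concrete, here is a minimal sketch, not the HT implementation from the talk; the names window_attention, coarsen, and hierarchical_attention, and the choices of average pooling and non-overlapping windows, are illustrative assumptions. The idea: attention is restricted to fixed-size windows at each level, and each coarsening step doubles the physical range a window covers, so the total cost stays linear in the number of tokens.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def window_attention(x, window):
        # x: (n, d) tokens; attend only within non-overlapping windows of
        # fixed size, so the cost is O(n * window * d) -- linear in n.
        n, d = x.shape
        out = np.empty_like(x)
        for s in range(0, n, window):
            blk = x[s:s + window]                 # (w, d)
            scores = blk @ blk.T / np.sqrt(d)     # (w, w) attention logits
            out[s:s + window] = softmax(scores) @ blk
        return out

    def coarsen(x):
        # Average-pool token pairs: one level up the hierarchy, where the
        # same window size spans twice the physical range (scale-adaptive reach).
        n = x.shape[0] // 2 * 2
        return 0.5 * (x[0:n:2] + x[1:n:2])

    def hierarchical_attention(x, levels=3, window=8):
        # Nested evaluation: window attention at the finest scale, then
        # coarsen and repeat; each level costs at most half the one below,
        # so the total work over all levels remains O(n).
        feats = []
        for _ in range(levels):
            x = window_attention(x, window)
            feats.append(x)
            x = coarsen(x)
        return feats  # multiscale features, finest level first

    feats = hierarchical_attention(np.random.randn(64, 16))
    print([f.shape for f in feats])  # [(64, 16), (32, 16), (16, 16)]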
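Similarly, while the abstract does not define the empirical $H^1$ loss, a plausible form (stated here as an assumption, with $u_\theta$ the network prediction and $u$ the reference solution sampled at grid points $x_1,\dots,x_N$) adds a gradient-matching term to the usual mean-squared misfit:

$$\mathcal{L}_{H^1}(\theta) = \frac{1}{N}\sum_{i=1}^{N}\bigl|u_\theta(x_i)-u(x_i)\bigr|^2 + \frac{1}{N}\sum_{i=1}^{N}\bigl|\nabla u_\theta(x_i)-\nabla u(x_i)\bigr|^2,$$

where $\nabla$ would typically be approximated by finite differences on the sampling grid. Since differentiation scales a Fourier mode of frequency $k$ by a factor of $k$, the second term weights high-frequency errors more heavily, which is how such a loss counteracts spectral bias.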
Tencent Meeting Room Number: 598-611-825