Physics-informed machine learning (PIML) has emerged as a means of solving partial differential equations (PDEs) in computational science and engineering (CS&E). In particular, physics-informed neural networks (PINNs) have been introduced to solve PDE problems without labeled data. As this new field and paradigm for solving PDEs emerges, there is an opportunity to bring existing methodological knowledge to bear on the development of new PINN-based methods. Furthermore, PINNs present systemic challenges not seen in classical solvers, requiring new perspectives in the development of methods that overcome them.
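The data-free formulation referred to above can be illustrated with a minimal sketch (an illustrative toy problem, not taken from the dissertation): the network is trained only on the PDE residual at sampled collocation points plus a boundary term, with no solution data. Here the model problem, network size, and optimizer settings are all assumptions chosen for brevity.

```python
# Minimal PINN sketch: solve du/dx = cos(x) on [0, 2*pi] with u(0) = 0
# (exact solution sin(x)), using only the residual loss -- no solution data.
import torch

torch.manual_seed(0)

# Small fully connected network approximating u_theta(x).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pinn_loss(net, n_collocation=64):
    # Collocation points where the PDE residual is enforced.
    x = 2 * torch.pi * torch.rand(n_collocation, 1, requires_grad=True)
    u = net(x)
    # du/dx via automatic differentiation.
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    residual = du - torch.cos(x)        # PDE residual
    bc = net(torch.zeros(1, 1))         # boundary condition u(0) = 0
    return (residual ** 2).mean() + (bc ** 2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
initial_loss = pinn_loss(net).item()
for _ in range(1000):
    opt.zero_grad()
    loss = pinn_loss(net)
    loss.backward()
    opt.step()
final_loss = loss.item()
```

Everything the optimizer sees is derived from the governing equation itself, which is what distinguishes PINNs from purely data-driven surrogates.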
Toward the general advancement of the field of PINNs, this work presents multiple novel methods that reduce cost and improve accuracy relative to baseline PINNs.

First, we investigate the application of multifidelity modeling to PINN architectures. We draw a parallel between the expressivity and optimization of the neural network and the concept of high- and low-fidelity approximation. Using this understanding, we construct a high-fidelity emulator from low-fidelity input that improves accuracy at no additional online cost beyond the offline cost of constructing the emulator.

Second, we investigate metalearning of PINN parameter initializations for new tasks. We reframe existing surrogate-modeling techniques to estimate PINN initializations across the parametric PDE domain for unseen tasks. The predicted initialization improves both the accuracy and the convergence of the PINN.

Third, we investigate temporal causality and domain decomposition for PINNs. We identify a gap in existing methodology and introduce a unified framework for causal sweeping strategies for PINNs and their temporal decompositions. New methods described under this framework are shown to improve accuracy and decrease cost on problems that are first order in time.

Finally, the Kolmogorov n-width is investigated as a lens through which PINN accuracy bounds and optimization criteria can be defined for multitask problems, providing insight into the learned neural network basis functions. In summary, this dissertation advances the computational research of PINNs, providing cost and accuracy benefits to a new paradigm of PDE solver.
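The temporal-causality idea mentioned above can be sketched in a few lines. A common formulation in the PINN literature (not necessarily the exact scheme used in this dissertation) down-weights the residual loss at later time slabs by the accumulated residual at earlier ones, so the network must resolve early times before later ones. The function name and the exponential weighting form below are assumptions for illustration.

```python
import numpy as np

def causal_weights(residual_losses, epsilon=1.0):
    """Causal weights w_i = exp(-epsilon * sum_{k < i} L_k).

    residual_losses: per-time-slab PDE residual losses L_0 .. L_{N-1}.
    The first slab always has weight 1; later slabs are down-weighted
    until earlier slabs are solved (their losses approach zero).
    """
    L = np.asarray(residual_losses, dtype=float)
    # Sum of all residual losses strictly before slab i.
    cum = np.concatenate(([0.0], np.cumsum(L)[:-1]))
    return np.exp(-epsilon * cum)

# Example: the first two slabs are nearly solved, the later two are not,
# so training effort stays focused near the earlier times.
w = causal_weights([1e-3, 1e-3, 0.5, 0.8], epsilon=10.0)
```

Sweeping strategies take this one step further by training on one temporal subdomain at a time and marching forward, which is where temporal domain decomposition enters.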