The use of neural networks for solving partial differential equations (PDEs) has attracted considerable attention in recent years. In this talk, I will first highlight their advantages over traditional numerical methods, including improved approximation rates and the potential to overcome the curse of dimensionality. I will then discuss the challenges that arise when applying neural networks to PDEs, particularly in training. Because training is inherently a highly nonconvex optimization problem, it can lead to poor local minima with large training errors, especially in complex PDE settings. To address these issues, I will demonstrate how incorporating mathematical insight into the design of training algorithms and network architectures can lead to significant improvements in both accuracy and robustness.
About the Speaker
Dr. Yahong Yang received his Ph.D. in Mathematics from the Hong Kong University of Science and Technology in 2023. He is currently a Visiting Assistant Professor at Georgia Institute of Technology and was a Postdoctoral Scholar at Pennsylvania State University from 2023 to 2025. His research interests include machine learning theory, mathematical modeling in materials science and biology, and numerical methods for partial differential equations. His current work focuses on the theoretical foundations of deep learning and the development of machine learning–based methods for solving complex PDEs, such as the Allen–Cahn equation, the Gray–Scott model, and problems involving Green’s functions, with particular emphasis on approximation, generalization analysis, and optimization algorithm design.