Abstract: Machine learning has increasingly influenced the development of scientific computing. In this talk, I will share some recent experiences of how classical analysis can help us understand machine learning algorithms. The first example is online learning, where ODEs and SDEs offer a concise explanation of the optimal regret bounds. In the second example, a perturbative analysis clarifies why line spectrum estimation algorithms sometimes exhibit a super-convergence phenomenon.
About the speaker:
Lexing Ying is a professor of mathematics at Stanford University. He received his B.S. from Shanghai Jiaotong University in 1998 and his Ph.D. from New York University in 2004. Before joining Stanford in 2012, he was a postdoc at Caltech and a professor at UT Austin. He received a Sloan Fellowship in 2007, an NSF CAREER Award in 2009, the Feng Kang Prize in 2011, and the James H. Wilkinson Prize in 2013. He is an invited speaker at ICM 2022.