Structured Matrix Approximation from Matrix-Vector Products

I will discuss the problem of approximating a target matrix A with a structured matrix, given access to a limited number of adaptively chosen matrix-vector products with A. This general problem arises across computational science, both in algorithmic applications and, more recently, in Scientific Machine Learning (SciML), where it abstracts the central task of operator learning.

For common structures, like low-rank matrices, the number of matrix-vector products required to obtain a near-optimal approximation to A is well understood. However, for a number of other important structures, like sparse matrices or hierarchically low-rank matrices, results have been more elusive. I will present a recent algorithm that achieves near-optimal complexity for approximation by hierarchically low-rank matrices, which are especially important in SciML applications. Our result leverages several new techniques, including robust versions of the randomized singular value decomposition (RandSVD), which are combined with the celebrated “peeling algorithms” of Lin, Lu, and Ying.
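To make the access model concrete, below is a minimal Python sketch of two of the ingredients named above, in their textbook forms rather than the robust variants developed in the paper: a basic RandSVD (in the style of Halko, Martinsson, and Tropp) that touches A only through products with A and A^T, and a single step of the peeling idea, which uses zero-padded probe vectors to isolate an off-diagonal low-rank block. The function names, signatures, and parameters here are illustrative, not taken from the paper.

    import numpy as np

    def randsvd(matvec, rmatvec, n, rank, oversample=10):
        # Textbook randomized SVD: a rank-`rank` approximation of an n x n
        # matrix A accessed only via matvec(V) = A @ V and rmatvec(V) = A.T @ V.
        k = rank + oversample
        Omega = np.random.randn(n, k)
        Q, _ = np.linalg.qr(matvec(Omega))   # orthonormal basis for a sketch of range(A)
        B = rmatvec(Q).T                     # B = Q.T @ A, via k transpose products
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ U)[:, :rank], s[:rank], Vt[:rank, :]  # A ~= U @ diag(s) @ Vt

    def peel_offdiag_block(matvec, rmatvec, n, rank, oversample=10):
        # One step of the peeling idea: for A split into 2 x 2 blocks with
        # top-right block A12 of rank <= `rank`, probes that vanish on the
        # first block of coordinates isolate A12 from the rest of the matrix.
        m = n // 2
        k = rank + oversample
        G = np.random.randn(m, k)
        # A @ [0; G] = [A12 @ G; A22 @ G]; the top half sketches range(A12).
        Y = matvec(np.vstack([np.zeros((m, k)), G]))[:m]
        Q, _ = np.linalg.qr(Y)
        # A.T @ [Q; 0] = [A11.T @ Q; A12.T @ Q]; the bottom half is (Q.T @ A12).T.
        B = rmatvec(np.vstack([Q, np.zeros((m, k))]))[m:].T
        return Q @ B                         # ~= A12, since range(A12) lies in range(Q)

    # Toy check: plant an exactly rank-5 top-right block and recover it.
    rng = np.random.default_rng(0)
    n, r = 200, 5
    A = rng.standard_normal((n, n))
    A[:n // 2, n // 2:] = rng.standard_normal((n // 2, r)) @ rng.standard_normal((r, n // 2))
    A12_hat = peel_offdiag_block(lambda V: A @ V, lambda V: A.T @ V, n, rank=r)
    print(np.linalg.norm(A[:n // 2, n // 2:] - A12_hat))  # near machine precision

Roughly speaking, the full peeling algorithm applies this block-isolation trick recursively over finer and finer partitions, subtracting the low-rank blocks recovered at each level before moving on; controlling how errors compound across levels is where the robust RandSVD variants come in.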

This talk presents joint work with Noah Amsel, Tyler Chen, Feyza Duman Keles, Diana Halikias, Cameron Musco, and David Persson. Our paper is available at https://arxiv.org/abs/2407.04686.

About the speaker

Christopher Musco is an Assistant Professor at New York University. His research lies at the intersection of theoretical computer science, computational mathematics, and data science, with a focus on developing fast randomized algorithms for linear algebraic problems and beyond.

Date
Location: Amos Eaton 216
Speaker: Christopher Musco, New York University