Mathematical Sciences Colloquium

The Mathematical Sciences Colloquium invites speakers from all areas of mathematics and is open to all members of the RPI community.

A static quantum embedding scheme based on coupled cluster theory

Avijit Shee from University of California, Berkeley

We develop a static quantum embedding scheme that utilizes different levels of approximations to coupled cluster (CC) theory for an active fragment region and its environment. In this approach, we solve the local fragment problem using a high-level CC method and address the environment problem with a lower-level Møller–Plesset (MP) perturbative method combined with an efficient relaxation mechanism. We define a static renormalized interaction for the fragment problem with the quantities obtained from the low-level method.
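
As a concrete illustration of the two component levels of theory (not the speaker's embedding code), the following sketch computes MP2 and CCSD correlation energies on the same mean-field reference with the open-source PySCF package; the molecule and basis set are placeholders.

    # Illustrative only: the low-level (MP2) and high-level (CCSD) methods
    # combined in the embedding scheme, run on a small stand-in molecule
    # rather than an actual fragment/environment split.
    from pyscf import gto, scf, mp, cc

    mol = gto.M(atom="O 0 0 0; H 0 0.76 -0.59; H 0 -0.76 -0.59",
                basis="cc-pvdz")       # water as a placeholder system
    mf = scf.RHF(mol).run()            # shared Hartree-Fock reference

    e_mp2 = mp.MP2(mf).run().e_corr    # low-level method (environment-style)
    e_ccsd = cc.CCSD(mf).run().e_corr  # high-level method (fragment-style)
    print(f"MP2: {e_mp2:.6f} Ha, CCSD: {e_ccsd:.6f} Ha")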

Structured Matrix Approximation from Matrix-Vector Products

Christopher Musco from New York University

I will discuss the problem of approximating a target matrix A with a structured matrix, given access to a limited number of adaptively chosen matrix-vector products with A. This general problem arises across computational science, both in algorithmic applications and, more recently, in Scientific Machine Learning (SciML), where it abstracts the central task of operator learning.
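
As a toy instance of this access model (a generic randomized rangefinder, not the algorithms of the talk), the sketch below recovers a hidden low-rank symmetric matrix from a handful of matrix-vector products:

    # Recover a hidden rank-k symmetric matrix using only products A @ x.
    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 500, 10
    G = rng.standard_normal((n, k))
    A = G @ G.T                        # hidden target: symmetric, rank k
    matvec = lambda x: A @ x           # the only access we allow ourselves

    Omega = rng.standard_normal((n, k + 5))            # random probes
    Y = np.column_stack([matvec(w) for w in Omega.T])  # k + 5 matvecs
    Q, _ = np.linalg.qr(Y)                             # basis for range(A)
    AQ = np.column_stack([matvec(q) for q in Q.T])     # k + 5 more matvecs
    A_approx = Q @ AQ.T                # equals Q Q^T A since A is symmetric
    print(np.linalg.norm(A - A_approx) / np.linalg.norm(A))  # near machine precision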

Wave Turbulence: Why and when it works and open challenges

Alan Newell from University of Arizona

Wave turbulence, the long-time statistical behavior of a sea of weakly nonlinear dispersive waves, unlike the case of hydrodynamic turbulence, has, in many circumstances, a natural asymptotic closure. The closure gives a closed (kinetic) equation for the energy (or number) density, clearly revealing the resonance mechanism by which waves of one wavelength and direction transfer energy to others throughout the spectrum. Equations for the higher-order cumulants are all linear and satisfied by a frequency renormalization.

Direct interpolative construction of quantized tensor trains

Michael Lindsey from UC Berkeley

Quantized tensor trains (QTTs) have recently emerged as a framework for the numerical discretization of continuous functions, with the potential for widespread applications in numerical analysis, including rank-structured solvers and preconditioners based on "quantum-inspired" algorithms such as DMRG. However, the theory of QTT approximation is not fully understood.
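
To fix ideas, here is a minimal sketch of the classical TT-SVD construction of a QTT (the baseline against which a direct interpolative construction competes): a function sampled on 2^d points is reshaped into a d-dimensional binary tensor and compressed by successive truncated SVDs.

    # Quantize f on a 2^d grid into a binary tensor; compress with TT-SVD.
    import numpy as np

    d, tol = 10, 1e-10
    x = np.linspace(0, 1, 2**d, endpoint=False)
    f = np.sin(2 * np.pi * x)          # smooth function => low QTT ranks
    T = f.reshape([2] * d)             # quantization: one binary index per bit

    cores, r = [], 1
    for i in range(d - 1):
        M = T.reshape(r * 2, -1)       # unfold: current index vs. the rest
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))
        cores.append(U[:, :rank].reshape(r, 2, rank))
        T = s[:rank, None] * Vt[:rank]
        r = rank
    cores.append(T.reshape(r, 2, 1))
    print("QTT ranks:", [c.shape[2] for c in cores[:-1]])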

Simulation of molecules and materials from the first-principles of quantum mechanics

Eric Cancès from École des Ponts ParisTech

In a seminal article in 1929, P.A.M. Dirac wrote: "The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. It therefore becomes desirable that approximate practical methods of applying quantum mechanics should be developed, which can lead to an explanation of the main features of complex atomic systems without too much computation."

Computational tools for the continuous spectrum of self-adjoint operators

Andrew Horning from Massachusetts Institute of Technology

Ever since Joseph Fourier used sines and cosines to diagonalize the Laplacian and solve the heat equation in 1822, spectral decompositions of linear operators have empowered scientists to disentangle complex physical phenomena. However, the spectrum of a self-adjoint operator can be more sophisticated than its familiar matrix counterpart; it may contain a continuum and the operator may not be diagonalized by eigenvectors alone. Until now, computational techniques for operators with continuous spectrum have typically focused on narrow classes with known analytical structure.

Randomized matrix decompositions for faster scientific computing

Robert Webber from California Institute of Technology (Caltech)

Traditional numerical methods based on expensive matrix factorizations struggle with the scale of modern scientific applications. For example, kernel-based algorithms take a data set of size N, form the kernel matrix of size N x N, and then perform an eigendecomposition or inversion at a cost of O(N^3) operations. For data sets of size N >= 10^5, kernel learning is too expensive, straining the limits of personal workstations and even dedicated computing clusters. Randomized iterative methods have emerged as faster alternatives to the classical approaches.
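
As one concrete example of such a randomized alternative (a generic rank-k Nystrom approximation, not necessarily the speaker's method), a kernel matrix can be compressed using only k matrix-vector products:

    # Rank-k randomized Nystrom approximation of an N x N kernel matrix,
    # avoiding the O(N^3) eigendecomposition.
    import numpy as np

    rng = np.random.default_rng(0)
    N, k = 2000, 50
    X = rng.standard_normal((N, 2))
    sq = np.sum(X**2, axis=1)
    D = sq[:, None] + sq[None, :] - 2 * X @ X.T   # squared distances
    K = np.exp(-D / 2)                            # Gaussian kernel matrix

    Omega = rng.standard_normal((N, k))
    Y = K @ Omega                                 # k matvecs with K
    Y += 1e-10 * np.linalg.norm(Y) * Omega        # tiny shift for stability
    C = np.linalg.cholesky(Omega.T @ Y)
    B = np.linalg.solve(C, Y.T)                   # K is approximated by B^T B
    print(np.linalg.norm(K - B.T @ B) / np.linalg.norm(K))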

Nonsmooth Nonconvex-Nonconcave Minimax Optimization: Algorithm Design and Convergence Analysis

Jiajin Li from Stanford University

Nonsmooth, nonconvex-nonconcave minimax optimization has gained widespread interest in recent years in machine learning and data science. However, much of the existing work focuses on variants of the gradient descent-ascent (GDA) algorithm, which exhibit several limitations: (1) they are only applicable to smooth problems, where gradient information is available; and (2) they may not converge globally, and they may suffer from limit cycles.
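
Limitation (2) already shows up on the bilinear toy problem f(x, y) = xy, whose unique saddle point is the origin; simultaneous GDA provably spirals outward, as this minimal sketch demonstrates:

    # Simultaneous gradient descent-ascent on f(x, y) = x * y.
    # The unique saddle point is (0, 0), yet the iterates spiral outward:
    # the norm of (x, y) grows by a factor sqrt(1 + eta^2) every step.
    eta, x, y = 0.1, 1.0, 1.0
    for k in range(200):
        x, y = x - eta * y, y + eta * x   # descend in x, ascend in y
    print(x, y, (x**2 + y**2) ** 0.5)     # norm = (1 + eta^2)^100 * sqrt(2)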

Direct solvers for variable-coefficient scattering problems

Abinand Gopal from Yale University

Time-harmonic scattering in variable media can often be modeled by linear elliptic partial differential equations. Such equations pose several numerical challenges. For example, they can be intrinsically ill-conditioned, necessitate the imposition of radiation conditions, and produce pollution errors when discretized with standard finite difference or finite element methods. To avoid these issues, it is often convenient to first reformulate the differential equation as an integral equation.
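
For example, for the Helmholtz equation \Delta u + k^2 (1 + q(x)) u = 0 with incident field u^{inc}, a standard reformulation of this type (sketched here with illustrative notation) is the Lippmann-Schwinger integral equation

    u(x) - k^2 \int G_k(x, y)\, q(y)\, u(y)\, dy = u^{inc}(x),

where G_k is the outgoing Helmholtz Green's function; for smooth, compactly supported q this is a second-kind equation with a compact integral operator, which sidesteps much of the ill-conditioning.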

Reliable and Adaptive Stochastic Optimization in the Face of Messy Data

Miaolan Xie from Cornell University

Solving real-world stochastic optimization problems presents two key challenges: the messiness of real-world data, which can be noisy, biased, or corrupted due to factors like outliers, distribution shifts, and even adversarial attacks; and the laborious, time-intensive requirement of manually tuning step sizes in many existing algorithms. I will introduce a simple adaptive optimization framework that avoids the need for manual step size tuning by adaptively adjusting the step size in each iteration based on the progress of the algorithm.
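
As a flavor of step-size adaptation driven by observed progress (a generic sketch under simplifying assumptions, not the speaker's framework):

    # Generic progress-based step-size adaptation: grow the step after
    # successful iterations, shrink it when the objective fails to
    # decrease sufficiently. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: 0.5 * np.sum(x**2)
    noisy_grad = lambda x: x + 0.01 * rng.standard_normal(x.shape)

    x, step = np.ones(10), 1.0
    for k in range(100):
        g = noisy_grad(x)
        trial = x - step * g
        if f(trial) <= f(x) - 0.1 * step * np.dot(g, g):  # enough progress?
            x, step = trial, step * 1.2                   # accept and grow
        else:
            step *= 0.5                                   # reject and shrink
    print(f(x))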

Understanding the Computational Limits of Random Optimization Problems via Landscape Geometry

Eren C. Kizildag from Columbia University

Optimization problems involving uncertainty are central to various domains, including modern statistics and data science. Despite their ubiquity, finding efficient algorithms that solve these problems remains a major challenge. Interestingly, many random optimization problems share a common feature, dubbed a statistical-computational gap: while the optimal value can be pinpointed non-constructively, known polynomial-time algorithms find strictly sub-optimal solutions, and no better algorithms are known.

Sparse coding via a Delaunay triangulation

Abiy Tasissa from Tufts University

Sparse coding is a technique of representing data as a sparse linear combination of a set of vectors. This representation facilitates computation and analysis of high-dimensional data that is prevalent in many applications. We study sparse coding in the setting where the set of vectors defines a unique Delaunay triangulation. We propose a weighted l1 regularizer and show that it provably yields a sparse solution. Further, we show stability of sparse codes using the Cayley-Menger determinant.
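
The geometric picture can be reproduced in a few lines: with SciPy's Delaunay triangulation, any point inside the hull of the atoms is encoded by the barycentric coordinates of its containing simplex, giving at most d + 1 nonzeros (the data here are illustrative).

    # A point inside a Delaunay simplex has an exact sparse code given by
    # barycentric coordinates: at most d + 1 nonzeros (here d = 2, so 3).
    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(0)
    atoms = rng.random((20, 2))        # dictionary vectors in the plane
    tri = Delaunay(atoms)

    x = np.array([0.5, 0.5])           # point to encode (inside the hull;
    s = tri.find_simplex(x)            # find_simplex returns -1 otherwise)
    Tinv, r = tri.transform[s, :2], tri.transform[s, 2]
    b = Tinv @ (x - r)                 # first two barycentric coordinates
    code = np.zeros(len(atoms))
    code[tri.simplices[s]] = np.append(b, 1 - b.sum())
    print(np.nonzero(code)[0], code @ atoms)  # 3 nonzeros; reproduces x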

Kolmogorov Superposition Theorem can Break the Curse of Dimensionality

Ming-Jun Lai from University of Georgia

We first review the problem of the curse of dimensionality when approximating multi-dimensional functions. Several approximation results from Barron, Petrushev, Bach, and others will be explained. Then we present a deterministic approach using the Kolmogorov superposition theorem. In particular, I will show that the rate of convergence using the Kolmogorov superposition theorem is O(d^2/n) for a class of K-Lipschitz continuous functions, where d is the dimension and n is the number of neurons.
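
For reference, the Kolmogorov superposition theorem states that every continuous f on [0,1]^d can be written with univariate functions and addition alone:

    f(x_1, \ldots, x_d) = \sum_{q=0}^{2d} g_q\Big( \sum_{p=1}^{d} \phi_{q,p}(x_p) \Big),

where the inner functions \phi_{q,p} are continuous and independent of f, and only the continuous outer functions g_q depend on f.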

Compressive Sensing Approaches to Reconstructing Neuronal Network Connectivity and Gaining Insights into Binocular Rivalry

Victor Barranca from Swarthmore College

Neuronal network connectivity demonstrates sparsity on multiple spatial scales and natural stimuli also possess sparse representations in numerous domains. In this talk, we underline the role of sparsity in the efficient encoding of network connectivity and inputs through nonlinear neuronal network dynamics.
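
The core compressive sensing step can be illustrated in a few lines: recovering a sparse vector from underdetermined linear measurements via l1-regularized least squares, solved here with ISTA (a generic sketch, not the speaker's network reconstruction pipeline).

    # Recover a k-sparse signal from m << n linear measurements y = A s
    # by iterative soft thresholding (ISTA) on the l1-regularized problem.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 200, 60, 5               # signal dim, measurements, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    s_true = np.zeros(n)
    s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ s_true

    lam, L = 0.01, np.linalg.norm(A, 2) ** 2   # step 1/L, L = Lipschitz const
    s = np.zeros(n)
    for _ in range(2000):
        z = s - (A.T @ (A @ s - y)) / L
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    print(np.linalg.norm(s - s_true) / np.linalg.norm(s_true))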

The Shifted Boundary / Shifted Fracture Method for Computational Mechanics

Guglielmo Scovazzi from Duke University

Embedded/immersed/unfitted boundary methods obviate the need for continual re-meshing in many applications involving rapid prototyping and design. Unfortunately, many finite element embedded boundary methods (cutFEM, the Finite Cell Method, etc.) face difficulties when the boundary cuts the background mesh arbitrarily, producing small cut cells that complicate quadrature and degrade conditioning.

Optimal structure-driven algorithm design in multi-agent machine learning

Tianyi Lin from UC Berkeley

Multi-agent machine learning has seen tremendous achievements in recent years; yet, translating single-agent optimization techniques to the multi-agent domain may not be straightforward. Two of the basic models for multi-agent machine learning -- the minimax optimization problem and the variational inequality problem -- are both computationally intractable in general. However, the gains from leveraging special structure can be huge, and understanding optimal structure-driven algorithms is important from both theoretical and practical viewpoints.

Strongly correlated quantum chemistry: numerical analysis and recent applications

Fabian Faulstich from University of California, Berkeley

Strongly correlated quantum systems include some of the most challenging problems in science. I will present the first numerical analysis for the coupled cluster method tailored by matrix product states, which is a promising method for handling strongly correlated systems. I will then discuss recent applications of the coupled cluster method and matrix product states for solving the magic angle twisted bilayer graphene system at the level of interacting electrons.

Generalized Submodular Optimization—Theory, Algorithms, and Applications

Qimeng Yu from Northwestern University

Submodularity is an important concept in integer and combinatorial optimization. Submodular set functions capture diminishing returns, and thanks to this desirable property they are found in numerous real-world applications. A submodular set function models the effect of selecting items from a single ground set, and whether an item is chosen can be modeled using a binary variable. However, in many problem contexts, the decisions consist of choosing multiple copies of homogeneous items, or heterogeneous items from multiple sets.
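
Concretely, a set function f on a ground set V is submodular when it exhibits diminishing returns:

    f(A \cup \{e\}) - f(A) \ge f(B \cup \{e\}) - f(B) \quad \text{for all } A \subseteq B \subseteq V, \; e \in V \setminus B.

The generalized setting of the talk replaces these binary selections with choices of multiple copies or items from multiple sets.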

Transport Optimization Methods in Bayesian Sampling Problems

Wuchen Li from University of South Carolina

We present a class of information metric optimization methods for high-dimensional Bayesian sampling problems. First, two examples of information geometries on probability spaces, the Fisher-Rao and Wasserstein-2 geometries, are studied. Then, focusing on the Wasserstein-2 metric, we introduce accelerated gradient and Newton flows to design fast and efficient sampling algorithms. We also present practical sampling approximations for the proposed dynamics in high-dimensional sample spaces.
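
A standard fact underlying this program (stated here for orientation, in common notation): the Wasserstein-2 gradient flow of the KL divergence with respect to a target \pi \propto e^{-V} is the Fokker-Planck equation

    \partial_t \rho = \nabla \cdot (\rho \nabla V) + \Delta \rho,

whose particle realization is the Langevin dynamics dX_t = -\nabla V(X_t)\, dt + \sqrt{2}\, dB_t; the accelerated gradient and Newton flows of the talk build on this basic dynamics.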

Stochastic Algorithms in the Large: Batch Size Saturation, Stepsize Criticality, Generalization Performance, and Exact Dynamics

Courtney Paquette from McGill University

In this talk, I will present a framework for analyzing dynamics of stochastic optimization algorithms (e.g., stochastic gradient descent (SGD) and momentum (SGD+M)) when both the number of samples and dimensions are large. For the analysis, I will introduce a stochastic differential equation, called homogenized SGD.
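
Schematically (a generic diffusion approximation for orientation, not the exact homogenized SGD of the talk), such analyses replace the discrete SGD recursion with a stochastic differential equation:

    X_{k+1} = X_k - \eta\, \nabla f(X_k; \xi_k) \quad \leadsto \quad dX_t = -\nabla f(X_t)\, dt + \sqrt{\eta}\, \Sigma(X_t)^{1/2}\, dB_t,

where \Sigma is the covariance of the stochastic gradient noise.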

Mathematics of novel materials from atomic to macroscopic scales

Alexander Watson from University of Minnesota

Materials' electronic properties arise from the complex dynamics of electrons flowing through the material. These dynamics are quantum mechanical and realize many surprising phenomena without classical analogues.

Optimization for statistical learning with low dimensional structure: regularity and conditioning

Lijun Ding from University of Wisconsin

Many statistical machine learning problems, where one aims to recover an underlying low-dimensional signal, are based on optimization. Existing work often either overlooks the computational complexity of solving the optimization problem or requires case-specific algorithms and analyses, especially for nonconvex problems. This talk addresses these two issues from a unified perspective of conditioning.

A mean-field opinion model on hypergraphs: from modeling to inference

Weiqi Chu from UCLA

The perspectives and opinions of people change and spread through social interactions on a daily basis. In the study of opinion dynamics on social networks, one often models social entities (such as Twitter accounts) as nodes and their relationships (such as following) as edges, and examines how opinions evolve as dynamical processes on networks, including graphs, hypergraphs, multi-layer networks, etc. In the first part of my talk, I will introduce a model of opinion dynamics and derive its mean-field limit as the total number of agents goes to infinity.

Data-driven Approaches and Operator Learning for Solving PDE-related Problems

Zecheng Zhang from Carnegie Mellon University

Solving partial differential equations (PDEs) and PDE-based model reduction are challenging problems, particularly when PDEs have multiscale features. The data-driven approach has become an excellent option for some scientific computing problems, and it is even more effective for engineering applications where data are available. There are various data-driven treatments for PDE-related problems. Many of them can be implemented in the operator learning framework, since the underlying mathematical problem amounts to constructing an operator. I will focus on and discuss operator learning.

First-Order Methods for Differentiable "Nonsmooth" Convex Optimization: A Tale of Frank-Wolfe and Multiplicative-Gradient

Renbo Zhao from MIT

The remarkable success of first-order methods on huge-scale problems has mostly been in the realm of problems where the objective function or its gradient satisfy a Lipschitz property. Here we consider important convex optimization problems where neither the objective nor its gradient is Lipschitz; indeed, one or both typically blow up on part of the (relative) boundary of the feasible region. Such problems appear in a wide range of applications across numerous fields, including statistical machine learning, medical imaging, experimental design, quantum physics, etc.
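
For orientation, here is the vanilla Frank-Wolfe template on the probability simplex with a smooth toy objective (the talk's contribution is precisely to extend this kind of template beyond Lipschitz gradients):

    # Vanilla Frank-Wolfe on the probability simplex. Each step solves a
    # linear minimization over the feasible set (a vertex), so no
    # projection is ever needed.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    b = rng.random(n)
    f_grad = lambda x: x - b           # gradient of f(x) = 0.5 * ||x - b||^2

    x = np.ones(n) / n                 # start at the simplex barycenter
    for k in range(500):
        g = f_grad(x)
        v = np.zeros(n)
        v[np.argmin(g)] = 1.0          # LMO: best vertex of the simplex
        x += (2.0 / (k + 2)) * (v - x) # classic step size 2/(k+2)
    print(0.5 * np.linalg.norm(x - b) ** 2)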

Benefits of Randomized First Order Algorithms for Min-Max Optimization

Ahmet Alacaoglu from University of Wisconsin

Modern data science applications require solving high dimensional optimization problems with a large number of data points. Min-max optimization provides a unified framework for many problems in this context, ranging from empirical risk minimization in machine learning to medical imaging and nonlinear programming. This talk will present two approaches for using randomization to design simple, practical, and adaptive optimization algorithms that improve the best-known complexity guarantees for convex-concave optimization.

Quantum algorithms for Hamiltonian simulation with unbounded operators

Di Fang from University of California, Berkeley

Recent years have witnessed tremendous progress in developing and analyzing quantum computing algorithms for quantum dynamics simulation of bounded operators (Hamiltonian simulation). However, many scientific and engineering problems require the efficient treatment of unbounded operators, which frequently arise due to the discretization of differential operators. Such applications include molecular dynamics, electronic structure theory, quantum control and quantum machine learning.
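
The prototypical tool here is the Trotter product formula: for H = A + B,

    e^{-i(A+B)t} = \lim_{n \to \infty} \big( e^{-iAt/n}\, e^{-iBt/n} \big)^n,

with first-order error O(t^2 \|[A,B]\| / n) when A and B are bounded; the difficulty addressed in the talk is that for unbounded operators (e.g., A = -\Delta arising from a differential operator) such commutator bounds no longer apply directly.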

A PDE Model to Study Natural Selection Across Multiple Levels of Organization

Daniel Cooney from University of Pennsylvania

Natural selection in complex biological and social systems can simultaneously operate across multiple levels of organization, ranging from genes and cells to animal groups and complex human societies. Such scenarios typically present an evolutionary conflict between the incentive of individuals to cheat and the collective incentive to establish cooperation within a group.

Constrained Optimization Methods for Training Machine Learning Models with AUC-Based Fairness Constraints

Qihang Lin from University of Iowa

Machine learning technologies have been increasingly used in high-stakes decision making systems, which raises a new challenge of avoiding unfair and discriminatory decisions for protected classes. Among the techniques for improving the fairness of AI systems, the optimization-based method, which trains a model through optimizing its prediction performance subject to fairness constraints, is most popular because of its intuitive idea and the Pareto efficiency it provides when trading off prediction performance against fairness.