Kolmogorov Superposition Theorem Can Break the Curse of Dimensionality

Abstract: I first review the problem of the curse of dimensionality when approximating multi-dimensional functions. Several approximation results from Barron, Petrushev, Bach, and others will be explained. Then I present a deterministic approach using the Kolmogorov superposition theorem. In particular, I will show that the rate of convergence obtained via the Kolmogorov superposition theorem is O(d^2/n) for the class of K-Lipschitz continuous functions, where d is the dimension and n is the number of neurons. Moreover, I will establish the rate of convergence of a two-layer neural network computation for any continuous function. In addition, I will introduce neural network approximation based on higher-order ReLU functions to explain the powerful approximation of multivariate functions by deep learning algorithms with multiple layers. Finally, I will use this theory to explain why deep learning algorithms work for image classification.
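For reference, a common form of the superposition theorem mentioned above (the Lorentz formulation; the precise version used in the talk may differ) states that every continuous function f on [0,1]^d admits the representation

\[
  f(x_1,\dots,x_d) \;=\; \sum_{q=0}^{2d} g\Big(\sum_{p=1}^{d} \lambda_p \,\phi_q(x_p)\Big),
\]

where the inner functions \(\phi_q\) and constants \(\lambda_p > 0\) are fixed and independent of f, and only the continuous outer function g depends on f. The two-layer structure of this formula is what links it to neural network approximation.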
Date:
Location: Amos Eaton 215
Speaker: Prof. Ming-Jun Lai, University of Georgia