Class of '27 Lecture I - "Gradient Descent Without Gradients"

Abstract: The core of continuous optimization lies in using information from first- and second-order derivatives to produce steps that improve the objective function value. Classical methods such as gradient descent and Newton's method rely on this information. The method recently popular in machine learning - Stochastic Gradient Descent - does not require the gradient itself, but it still requires an unbiased estimate of the gradient. However, in many applications neither derivatives nor their unbiased estimates are available. We will thus discuss a variety of methods, both deterministic and stochastic, that construct useful gradient approximations from function values alone...
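As a rough illustration of the general idea (not of the specific methods covered in the talk), a gradient approximation can be built from function values alone via forward finite differences and then plugged into a standard gradient-descent loop. The sketch below is a minimal assumed example; the test function, step sizes, and iteration count are illustrative choices, not part of the lecture material.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward finite-difference approximation of the gradient of f at x,
    using only function values (one extra evaluation per coordinate)."""
    fx = f(x)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

# Illustrative use: gradient descent on a simple quadratic, with the true
# gradient replaced by its finite-difference approximation.
f = lambda x: 0.5 * np.dot(x, x)
x = np.array([1.0, -2.0])
for _ in range(50):
    x = x - 0.1 * fd_gradient(f, x)
print(x)  # approaches the minimizer at the origin
```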
Date:
Location: Amos Eaton 214
Speaker: Katya Scheinberg from Lehigh University