Damped Proximal Augmented Lagrangian Method for Weakly-Convex Problems with Convex Constraints

We present a damped proximal augmented Lagrangian method (DPALM) for solving problems with a weakly-convex objective and convex linear/nonlinear constraints. Instead of taking a full dual stepsize, DPALM adopts a damped dual stepsize to ensure the boundedness of dual iterates. We show that DPALM can produce a (near) ε-KKT point within O(ε^{-2}) outer iterations if each DPALM subproblem is solved to a proper accuracy. In addition, we establish the overall iteration complexity of DPALM when the objective is either a regularized smooth function or in a regularized compositional form. In the former case, DPALM achieves a complexity of Õ(ε^{-2.5}) to produce an ε-KKT point by applying an accelerated proximal gradient (APG) method to each DPALM subproblem. In the latter case, the complexity of DPALM is Õ(ε^{-3}) to produce a near ε-KKT point by using an APG to solve a Moreau-envelope smoothed version of each subproblem. Our outer iteration complexity and overall complexity either generalize the existing best results from unconstrained or linearly-constrained problems to convex-constrained ones, or improve over the best-known results on solving problems with the same structure. Furthermore, numerical experiments on linearly/quadratically constrained nonconvex quadratic programs and linearly-constrained robust nonlinear least squares demonstrate the empirical efficiency of the proposed DPALM over several state-of-the-art methods.
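To make the damped dual update concrete, below is a minimal Python sketch of a proximal augmented Lagrangian outer loop with a damped dual stepsize, specialized to a quadratic objective with one linear equality constraint. The parameter values (rho, beta, sigma), the toy instance, and the exact linear-system subproblem solve are illustrative assumptions for this sketch, not the speaker's actual algorithm or analysis.

```python
import numpy as np

def dpalm_quadratic(Q, c, A, b, rho=1.0, beta=10.0, sigma=0.5, iters=300):
    """Sketch of a damped proximal AL loop for
        min 0.5 x'Qx + c'x   s.t.  Ax = b,
    where Q may be indefinite (weakly-convex objective).
    Each subproblem
        min_x 0.5 x'Qx + c'x + y'(Ax - b)
              + (beta/2)||Ax - b||^2 + (rho/2)||x - x_k||^2
    is strongly convex once rho exceeds the weak-convexity
    modulus, so here it is solved exactly via its normal
    equations. The dual step is damped by sigma in (0, 1)
    to keep the dual iterates y bounded."""
    n = Q.shape[0]
    x = np.zeros(n)
    y = np.zeros(A.shape[0])
    # Hessian of the proximal AL subproblem (constant here).
    M = Q + beta * A.T @ A + rho * np.eye(n)
    for _ in range(iters):
        # Primal step: set the subproblem gradient to zero.
        rhs = rho * x + beta * A.T @ b - c - A.T @ y
        x = np.linalg.solve(M, rhs)
        # Damped dual ascent step (sigma < 1 is the damping).
        y = y + sigma * beta * (A @ x - b)
    return x, y

# Illustrative instance: min 0.5*x1^2 - 0.25*x2^2  s.t.  x1 + x2 = 1
# (Q is indefinite, so the objective is weakly convex, not convex).
Q = np.array([[1.0, 0.0], [0.0, -0.5]])
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, y = dpalm_quadratic(Q, c, A, b)
print(x, y)  # approaches the KKT point x = (-1, 2), y = 1
```

On this instance one can check the KKT conditions by hand: with x2 = 1 - x1 the reduced objective 0.25*x1^2 + 0.5*x1 - 0.25 is minimized at x1 = -1, and stationarity Qx + A'y = 0 then gives y = 1, which is what the damped iteration converges to.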

Date
Location: Amos Eaton 215
Speaker: Hari Dahal, PhD Candidate from RPI Math Department