[Chuangyuan Lecture Series] An inexact regularized proximal Newton method for nonconvex and nonsmooth optimization

Published: 2023-10-23

Time: October 27, 2023, 14:00–15:00

Venue: Tencent Meeting, meeting ID 841 951 674; password: 1027

About the speaker:

Pan Shaohua is a professor and doctoral supervisor at the School of Mathematics, South China University of Technology. He currently serves as a director of the Operations Research Society of China and a standing director of its Mathematical Programming Branch. His research covers conic constrained optimization and complementarity problems, low-rank and sparse optimization, and the theory and algorithms of structured nonconvex nonsmooth optimization. He has led several grants from the National Natural Science Foundation of China and the Guangdong Natural Science Foundation, and has published over 50 papers in leading optimization and numerical computation journals such as Mathematical Programming, SIAM Journal on Optimization, SIAM Journal on Control and Optimization, SIAM Journal on Scientific Computing, and IMA Journal of Numerical Analysis. In 2019 he received the second prize of the Guangdong Natural Science Award.

About the lecture:

Title: An inexact regularized proximal Newton method for nonconvex and nonsmooth optimization

Abstract: This talk focuses on the minimization of the sum of a twice continuously differentiable function and a nonsmooth convex function. An inexact regularized proximal Newton method is proposed, in which the Hessian approximation is regularized by a term involving the ρth power of the KKT residual. For ρ=0, we justify the global convergence of the iterate sequence for the KL objective function and its R-linear convergence rate for the KL objective function of exponent 1/2. For ρ∈(0,1), by assuming that cluster points satisfy a locally Hölderian error bound of order q on a nice stationary point set and a local error bound of order q>1+ρ on the common stationary point set, respectively, we establish the global convergence of the iterate sequence and its superlinear convergence rate with order depending on q and ρ. A dual semismooth Newton augmented Lagrangian method is also developed for seeking an inexact minimizer of the subproblems. Numerical comparisons with two state-of-the-art methods on l_1-regularized Student's t-regressions, group penalized Student's t-regressions, and nonconvex image restoration confirm the efficiency of the proposed method.
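To make the iteration scheme concrete, the sketch below illustrates the general pattern the abstract describes for an l_1-regularized problem: regularize the Hessian by a multiple of the ρth power of the KKT residual, then solve the resulting quadratic-plus-l_1 subproblem inexactly (here by proximal gradient steps). All function names, the constant `c`, and the inner solver are illustrative assumptions, not the paper's actual implementation, which uses a dual semismooth Newton augmented Lagrangian method for the subproblems.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal map of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def kkt_residual(x, grad_f, lam):
    # Norm of the natural residual x - prox_{lam||.||_1}(x - grad f(x));
    # zero exactly at stationary points of f + lam*||.||_1
    return np.linalg.norm(x - soft_threshold(x - grad_f(x), lam))

def regularized_prox_newton(grad_f, hess_f, lam, x0, rho=0.5, c=1.0,
                            max_iter=50, inner_iter=200, tol=1e-8):
    """Illustrative regularized proximal Newton loop for min f(x) + lam*||x||_1.

    The model Hessian is H_k = hess_f(x_k) + c * r_k**rho * I, where r_k is
    the KKT residual, mirroring the rho-th power regularization in the talk
    (c is a hypothetical constant). The quadratic subproblem is solved
    inexactly by a fixed number of proximal gradient steps; the paper's
    actual inner solver and acceptance rules differ.
    """
    x = x0.copy()
    for _ in range(max_iter):
        g = grad_f(x)
        r = kkt_residual(x, grad_f, lam)
        if r <= tol:
            break
        H = hess_f(x) + c * r**rho * np.eye(x.size)  # regularized Hessian model
        L = np.linalg.norm(H, 2)                     # step size 1/L for the model
        y = x.copy()
        for _ in range(inner_iter):                  # inexact subproblem solve
            grad_q = g + H @ (y - x)                 # gradient of the quadratic model
            y = soft_threshold(y - grad_q / L, lam / L)
        x = y                                        # unit step; line search omitted
    return x
```

On a convex test instance (least squares plus an l_1 term) the loop drives the KKT residual toward zero; in the nonconvex setting the abstract targets, the regularization term keeps the model well-conditioned near stationarity while vanishing at the rate r^ρ, which is what enables the superlinear rate discussed in the talk.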


Author: 王承竞   Editor: 蔡京君