Session: Numerical Methods
Chair: Masaru Ito
Talk 1: Proximal gradient-type method with generalized distances for nonconvex composite optimization
Speaker: Masaru Ito
Abstract: In this talk, we consider a composite optimization problem minimizing the sum of two functions f and g. Typical proximal gradient methods rely on the descent lemma and the convexity of g, so the choice of distance-like functions defining the proximal subproblems is constrained by the structure of both f and g. We propose a proximal gradient-type method for the case where f has a locally Lipschitz gradient and g is nonconvex. We discuss conditions on the distance-like functions that allow broader choices while ensuring convergence results.
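To illustrate the template the abstract generalizes, here is a minimal sketch of a proximal gradient iteration on a composite problem f + g. It uses the squared Euclidean distance, the simplest distance-like (Bregman) function, and a convex g = tau*||.||_1; the talk's method replaces this distance by a broader class and allows nonconvex g. The test problem (A, b, tau) is an illustrative choice, not from the talk.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1 under the Euclidean distance
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(gradf, prox_g, x0, step, iters=200):
    # proximal gradient iteration:
    #   x_{k+1} = argmin_x  g(x) + <grad f(x_k), x> + (1/(2*step)) * D(x, x_k)
    # with D the squared Euclidean distance; generalized-distance methods
    # swap D for another distance-like function adapted to f and g.
    x = x0.copy()
    for _ in range(iters):
        x = prox_g(x - step * gradf(x), step)
    return x

# toy composite problem: f(x) = 0.5 * ||A x - b||^2,  g(x) = tau * ||x||_1
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 1.0])
tau = 0.5
gradf = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, s: soft_threshold(v, s * tau)

x = prox_gradient(gradf, prox_g, np.zeros(2), step=0.25)  # step = 1/L
```

Here the step size 1/L (L the Lipschitz constant of grad f) plays the role that the descent lemma plays in the analysis; relaxing this to locally Lipschitz gradients is part of what the talk addresses.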
Talk 2: A fixed-point algorithm with matrix splitting for nonlinear second-order cone complementarity problems
Speaker: Shunsuke Hayashi
Abstract: The second-order cone complementarity problem (SOCCP) is a wide class of problems containing the nonlinear complementarity problem (NCP) and the second-order cone programming problem (SOCP). Recently, Li et al. reformulated the linear SOCCP as a fixed-point problem via matrix splitting and constructed an Anderson-accelerated preconditioned modulus approach. In this study, we extend their approach to nonlinear SOCCPs by combining a matrix splitting with a fixed-point algorithm. We also present a variant with Anderson acceleration to enhance convergence performance, and we establish convergence under appropriate assumptions. Finally, we report numerical results evaluating the effectiveness of the algorithm.
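To convey the modulus/matrix-splitting idea in its simplest setting, the sketch below applies it to a linear complementarity problem (LCP), a special case of the problems in the talk. With a splitting M = A - B and a positive diagonal Omega, a fixed point x of (A + Omega) x = B x + (Omega - M)|x| - q yields the solution z = |x| + x. The data M, q and the Jacobi-type splitting are illustrative choices, not from the talk.

```python
import numpy as np

# LCP:  z >= 0,  M z + q >= 0,  z^T (M z + q) = 0
M = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-1.0, -2.0])

A = np.diag(np.diag(M))      # Jacobi-type splitting M = A - B
B = A - M
Omega = np.diag(np.diag(M))  # a common choice of positive diagonal

# modulus-based fixed-point iteration on the auxiliary variable x
x = np.zeros(2)
AO_inv = np.linalg.inv(A + Omega)
for _ in range(100):
    x = AO_inv @ (B @ x + (Omega - M) @ np.abs(x) - q)

z = np.abs(x) + x   # recovered solution
w = M @ z + q       # complementary slack variable
```

The modulus change of variables makes z >= 0, w >= 0 and z^T w = 0 hold by construction at any fixed point, so only the linear (here) or nonlinear equation needs to be solved; Anderson acceleration can then be applied directly to this fixed-point map.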
Talk 3: A Simple yet Highly Accurate Prediction-Correction Algorithm for Time-Varying Optimization
Speaker: Tomoya Kamijima
Abstract: Time-varying optimization problems arise in various applications such as robotics, signal processing, and electronics. We propose SHARP, a simple yet highly accurate prediction-correction algorithm for unconstrained time-varying problems. The prediction step is based on Lagrange interpolation of past solutions, keeping the computational cost low without requiring Hessian matrices or even gradients. To enhance stability, especially in nonconvex settings, an acceptance condition is introduced to reject excessively large updates. We provide theoretical guarantees of a small tracking error and demonstrate the superior performance of SHARP through numerical experiments.
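The prediction-correction structure described above can be sketched as follows: extrapolate the Lagrange polynomial through the last few iterates to predict the next minimizer, reject the prediction if it moves too far (the acceptance condition), then correct with a few gradient steps. This is a schematic reading of the abstract, not the authors' SHARP algorithm; the toy objective f(x, t) = 0.5*(x - sin t)^2, whose minimizer trajectory is x*(t) = sin t, and all parameter values are illustrative assumptions.

```python
import math

def lagrange_extrapolate(ts, xs, t_new):
    # evaluate the Lagrange interpolating polynomial through (ts, xs) at t_new
    pred = 0.0
    for i in range(len(ts)):
        w = 1.0
        for j in range(len(ts)):
            if j != i:
                w *= (t_new - ts[j]) / (ts[i] - ts[j])
        pred += w * xs[i]
    return pred

# toy time-varying problem: f(x, t) = 0.5 * (x - sin t)^2
grad = lambda x, t: x - math.sin(t)

h, alpha, n_corr, cap = 0.1, 0.5, 3, 1.0   # sampling period, step, corrections, cap
ts, xs = [0.0], [0.0]                      # history of (time, solution) pairs

for k in range(1, 60):
    t_new = k * h
    # prediction: extrapolate through up to the last three iterates
    hist = min(len(ts), 3)
    pred = lagrange_extrapolate(ts[-hist:], xs[-hist:], t_new)
    # acceptance condition: reject an excessively large predicted update
    if abs(pred - xs[-1]) > cap:
        pred = xs[-1]
    # correction: a few gradient steps on f(., t_new)
    x = pred
    for _ in range(n_corr):
        x -= alpha * grad(x, t_new)
    ts.append(t_new)
    xs.append(x)

tracking_error = abs(xs[-1] - math.sin(ts[-1]))
```

Note the prediction step uses only stored past solutions, so it needs neither Hessians nor gradients, matching the low-cost property claimed in the abstract; gradients appear only in the correction.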