Session: Trust-Region Methods
Chair: Mohamed Laghdaf Habiboullah
Talk 1: Complexity of trust-region methods in the presence of unbounded Hessian approximations
Speaker: Mohamed Laghdaf Habiboullah
Abstract: We extend traditional complexity analyses of trust-region methods for unconstrained, possibly nonconvex, optimization. Whereas most complexity analyses assume uniform boundedness of the model Hessians, we work with potentially unbounded model Hessians. Boundedness is not guaranteed in practical implementations, in particular ones based on quasi-Newton updates such as PSB, BFGS, and SR1. Our analysis is conducted for a family of trust-region methods that includes most known methods as special cases. We examine two regimes of Hessian growth: one bounded by a power of the number of successful iterations, and one bounded by a power of the number of iterations. This allows us to formalize and confirm the profound intuition of Powell [IMA J. Numer. Anal., 30(1):289-301, 2010], who studied convergence under a special case of our assumptions, but whose proof implicitly contained complexity arguments. Specifically, for \(0 \leq p < 1\), we establish a sharp \(O(\epsilon^{-2/(1-p)})\) evaluation complexity to find an \(\epsilon\)-stationary point when model Hessians are \(O(k^p)\), where \(k\) is the iteration counter. For \(p = 1\), which is the case studied by Powell, we establish a sharp \(O(\exp(c\epsilon^{-2}))\) evaluation complexity for a certain constant \(c > 0\). This is as Powell suspected, and it is far worse than other bounds surmised elsewhere in the literature. We establish similar bounds when model Hessians are \(O(|\mathcal{S}_k|^p)\), where \(|\mathcal{S}_k|\) is the number of successful (accepted-step) iterations up to iteration \(k\). To the best of our knowledge, ours is the first work to provide complexity bounds when model Hessians grow linearly with \(|\mathcal{S}_k|\) or at most linearly with \(k\), which covers multiple quasi-Newton approximations. Link: https://arxiv.org/abs/2408.06243
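To make the growth regime concrete, below is a minimal, illustrative trust-region loop in Python. It is a sketch under stated assumptions, not the authors' algorithm: the model Hessian norm is capped at \(C(k+1)^p\) to mimic the \(O(k^p)\) regime (the default \(p = 1\) corresponds to the Powell case), the subproblem is solved only at the Cauchy point, and the update constants are arbitrary choices for demonstration.

```python
# Illustrative sketch only (not the authors' algorithm): a basic trust-region
# loop whose model Hessian norm is capped at C * (k+1)**p, mimicking the
# growth regime ||B_k|| = O(k^p) discussed in the abstract.
import numpy as np

def cauchy_step(g, B, delta):
    """Minimize the quadratic model along -g within the trust region."""
    gn = np.linalg.norm(g)
    gBg = g @ B @ g
    t = delta / gn                       # boundary step length
    if gBg > 0:
        t = min(t, gn**2 / gBg)          # interior minimizer along -g
    return -t * g

def trust_region(f, grad, hess, x, p=1.0, C=1.0, delta=1.0,
                 eps=1e-4, max_iter=5000, eta=0.1):
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:     # eps-stationary point reached
            return x, k
        B = hess(x)
        nB = np.linalg.norm(B, 2)
        cap = C * (k + 1)**p             # enforce ||B_k|| <= C (k+1)^p
        if nB > cap:
            B = B * (cap / nB)
        s = cauchy_step(g, B, delta)
        pred = -(g @ s + 0.5 * s @ B @ s)    # predicted (model) decrease
        ared = f(x) - f(x + s)               # actual decrease
        rho = ared / pred if pred > 0 else -np.inf
        if rho >= eta:                   # successful iteration: accept step
            x = x + s
            delta *= 2.0
        else:                            # unsuccessful: shrink the radius
            delta *= 0.5
    return x, max_iter

# Toy run on the Rosenbrock function with the exact Hessian as model Hessian.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]],
                           [-400*x[0], 200.0]])
x, k = trust_region(f, grad, hess, np.array([-1.2, 1.0]))
print("iterations used:", k, "final point:", x)
```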
Talk 2: Advancing Trust-Region Methods: Gradient-Based Radius Updates and Stochastic Optimization
Speaker: Fabian Bastin
Abstract: We consider the framework of nonlinear, unconstrained, smooth optimization, where the trust-region approach has been established as one of the most prominent methods over the last decades. Significant research has nevertheless been devoted in recent years to updating the trust-region radius, leading to various schemes that exploit the structure of the problem under consideration. In particular, a more intricate link between the trust-region radius and the norm of the gradient at the current iterate has enabled the derivation of upper bounds on the number of iterations required to obtain an \(\epsilon\)-optimal solution while preserving the main traditional convergence properties. Such results have also been extended to stochastic programming with adaptive sampling, where the key ingredient for convergence guarantees is the ability to achieve a sufficient decrease with high probability. We revisit the progress made on the relationship between the trust-region radius update and the gradient norm to establish that, under these schemes, the trust region ultimately becomes inactive, even when an adaptive sampling approach is used in stochastic optimization. This ensures that, under mild conditions, minimizing the model inside the trust region can ultimately yield an approximate quasi-Newton direction. We illustrate these ideas using a trust-region method with adaptive sampling and truncated conjugate gradient on a set of benchmark problems.
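A concrete form of such a gradient-linked radius rule is sketched below; it is a hypothetical illustration, not the speaker's exact scheme, and all names and constants are assumptions. It combines a classical ratio test with a cap proportional to the current gradient norm, which is the mechanism that lets the trust region become inactive near stationarity.

```python
# Hypothetical sketch of a gradient-linked trust-region radius update
# (the constants and the exact rule are assumptions, not the speaker's
# scheme): a classical ratio test combined with a gradient-norm cap.
def update_radius(delta, rho, grad_norm, mu=1.0,
                  eta1=0.1, eta2=0.75, shrink=0.25, grow=2.0):
    if rho < eta1:            # poor model agreement: shrink the radius
        delta = shrink * delta
    elif rho > eta2:          # very good agreement: allow growth
        delta = grow * delta
    # Tie the radius to the gradient: near a stationary point the cap
    # mu * ||g_k|| shrinks at the same rate as a quasi-Newton step, so
    # for suitable mu the step becomes interior and the trust-region
    # constraint eventually stops binding.
    return min(delta, mu * grad_norm)
```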
Talk 3: Beyond Newton Interpolation: p-Factor Polynomials for Function Approximation in Optimization
Speaker: Olga Brezhneva
Abstract: We introduce a new interpolation method for nonlinear functions over an interval, focusing on cases where the approximated function is non-regular at a solution. In such cases, classical interpolation methods, such as Newton’s interpolation polynomial, do not necessarily provide the required accuracy for approximating solutions to the equation \(f(x) = 0\). In contrast, our method, based on p-factor interpolation polynomials, ensures the necessary accuracy, leading to improved function approximations in solving nonlinear equations. The presented results are based on the theory of p-regularity. Our approach provides new insights into function approximation techniques in nonlinear optimization. Specifically, it has potential applications in derivative-free optimization, where function evaluations are costly or noisy, as well as in trust-region methods and polynomial-based approaches for solving nonlinear problems.
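The failure mode the talk addresses can be seen in a small numeric experiment. The sketch below is a toy demonstration of the problem, not of the p-factor construction itself, and the test functions and interpolation nodes are arbitrary assumptions: for a function with a double, hence non-regular, root, the nearest root of a classical degree-2 interpolation polynomial is markedly less accurate than for a simple root.

```python
# Toy demonstration of the accuracy loss that motivates the talk (this is
# NOT the p-factor construction): roots of a classical quadratic interpolant
# for a simple root vs. a double (non-regular) root at x* = 0.
import numpy as np

xs = np.array([0.05, 0.10, 0.15])        # arbitrary nodes near the root
cases = {
    "simple root f(x) = e^x - 1":     lambda x: np.expm1(x),
    "double root f(x) = (e^x - 1)^2": lambda x: np.expm1(x)**2,
}
for label, f in cases.items():
    coeffs = np.polyfit(xs, f(xs), 2)    # exact degree-2 interpolant
    roots = np.roots(coeffs)
    err = min(abs(r) for r in roots)     # distance of nearest root to x* = 0
    print(f"{label}: root error = {err:.2e}")
```

Because the double-root function behaves like \(x^2\) near the solution, an interpolation error of size \(\delta\) displaces the computed root by roughly \(\sqrt{\delta}\) rather than \(\delta\), which is the accuracy gap the p-factor polynomials are designed to close.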