Finalists (in alphabetical order):
Guy Kornowski for the paper “Oracle Complexity in Nonsmooth Nonconvex Optimization”, co-authored with Ohad Shamir.
Abstract: It is well-known that given a smooth, bounded-from-below, and possibly nonconvex function, standard gradient-based methods can find $\epsilon$-stationary points (with gradient norm less than $\epsilon$) in $\mathcal{O}(1/\epsilon^2)$ iterations. However, many important nonconvex optimization problems, such as those associated with training modern neural networks, are inherently not smooth, making these results inapplicable. In this paper, we study nonsmooth nonconvex optimization from an oracle complexity viewpoint, where the algorithm is assumed to be given access only to local information about the function at various points. We provide two main results: First, we consider the problem of getting \emph{near} $\epsilon$-stationary points. This is perhaps the most natural relaxation of \emph{finding} $\epsilon$-stationary points, which is impossible in the nonsmooth nonconvex case. We prove that this relaxed goal cannot be achieved efficiently, for any distance and $\epsilon$ smaller than some constants. Our second result deals with the possibility of tackling nonsmooth nonconvex optimization by reduction to smooth optimization: Namely, applying smooth optimization methods on a smooth approximation of the objective function. For this approach, we prove under a mild assumption an inherent trade-off between oracle complexity and smoothness: On the one hand, smoothing a nonsmooth nonconvex function can be done very efficiently (e.g., by randomized smoothing), but with dimension-dependent factors in the smoothness parameter, which can strongly affect iteration complexity when plugging into standard smooth optimization methods. On the other hand, these dimension factors can be eliminated with suitable smoothing methods, but only by making the oracle complexity of the smoothing process exponentially large.
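To give a concrete feel for the randomized smoothing mentioned in the abstract, here is a minimal numerical sketch (not taken from the paper): with gradient-oracle access to a Lipschitz function $f$, averaging gradients at uniformly perturbed points estimates the gradient of the smooth surrogate $f_\delta(x) = \mathbb{E}_u[f(x + \delta u)]$. The surrogate's gradient-Lipschitz constant scales with the dimension (on the order of $\sqrt{d}\,L/\delta$ for $L$-Lipschitz $f$), which is the dimension-dependence the trade-off above refers to. The names `grad_f`, `delta`, and `n_samples` are illustrative, not from the paper.

```python
import numpy as np

def smoothed_grad(grad_f, x, delta=0.1, n_samples=100, rng=None):
    # Monte-Carlo estimate of the gradient of the smoothed surrogate
    # f_delta(x) = E_u[f(x + delta * u)], u uniform on the unit ball.
    # For Lipschitz f (differentiable almost everywhere), this gradient
    # equals E_u[grad_f(x + delta * u)], so averaging sampled gradients works.
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    total = np.zeros(d)
    for _ in range(n_samples):
        g = rng.standard_normal(d)
        u = g / np.linalg.norm(g) * rng.uniform() ** (1.0 / d)  # uniform in the ball
        total += grad_f(x + delta * u)
    return total / n_samples

# Toy nonsmooth nonconvex function: f(x) = ||x||_1 - 0.5 * ||x||_2^2
grad_f = lambda x: np.sign(x) - x   # valid wherever f is differentiable
print(smoothed_grad(grad_f, np.array([0.3, -0.2])))
```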
Naoki Marumo for the paper “Parameter-Free Accelerated Gradient Descent for Nonconvex Optimization”, co-authored with Akiko Takeda.
Abstract: We propose a new first-order method for minimizing nonconvex functions with a Lipschitz continuous gradient and Hessian. The proposed method is an accelerated gradient descent method with two restart mechanisms and finds a solution whose gradient norm is less than $\epsilon$ in $O(\epsilon^{-7/4})$ function and gradient evaluations. Unlike existing first-order methods with similar complexity bounds, our algorithm is parameter-free because it requires no prior knowledge of problem-dependent parameters, e.g., the Lipschitz constants and the target accuracy $\epsilon$. The main challenge in achieving this advantage is estimating the Lipschitz constant of the Hessian using only first-order information. To this end, we develop a new Hessian-free analysis based on two technical inequalities: a Jensen-type inequality for gradients and an error bound for the trapezoidal rule. Several numerical results illustrate that the proposed method performs comparably to existing algorithms with similar complexity bounds, even without parameter tuning.
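To make the restart idea concrete, here is a generic sketch (not the authors' algorithm) of Nesterov-style accelerated gradient descent with a simple function-value restart. Note that it assumes the gradient Lipschitz constant $L$ is known and supplied by the caller, which is exactly the knowledge the paper's parameter-free method dispenses with.

```python
import numpy as np

def agd_with_restart(f, grad_f, x0, L, eps=1e-6, max_iter=10_000):
    # Nesterov-style accelerated gradient descent with a function-value
    # restart: if the objective increases, the momentum is reset.
    # NOTE: assumes the gradient Lipschitz constant L is known; the paper's
    # parameter-free method instead estimates such constants on the fly
    # (including the Hessian's, using only first-order information).
    x, y, t = x0.copy(), x0.copy(), 1.0
    fx = f(x)
    for _ in range(max_iter):
        g = grad_f(y)
        if np.linalg.norm(g) < eps:         # found an eps-stationary point
            return y
        x_new = y - g / L                   # gradient step at the extrapolated point
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y_new = x_new + ((t - 1.0) / t_new) * (x_new - x)
        f_new = f(x_new)
        if f_new > fx:                      # restart: kill the momentum
            y_new, t_new = x_new.copy(), 1.0
        x, y, t, fx = x_new, y_new, t_new, f_new
    return x

# Toy usage: a smooth nonconvex objective with gradient Lipschitz constant 1.
f = lambda x: float(np.cos(x).sum())
grad_f = lambda x: -np.sin(x)
print(agd_with_restart(f, grad_f, np.array([1.0, -2.0]), L=1.0))
```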
Lai Tian for the paper “Testing Approximate Stationarity Concepts for Piecewise Affine Functions”, co-authored with Anthony Man-Cho So.
Abstract: We study the basic computational problem of detecting approximate stationary points for continuous piecewise affine (PA) functions. Our contributions span multiple aspects, including complexity, regularity, and algorithms. Specifically, we show that testing first-order approximate stationarity concepts, as defined by commonly used generalized subdifferentials, is computationally intractable unless $\mathcal{P} = \mathcal{NP}$. To facilitate computability, we consider a polynomial-time solvable relaxation by abusing the convex subdifferential sum rule and establish a tight characterization of its exactness. Furthermore, addressing an open issue motivated by the need to terminate the subgradient method in finite time, we introduce the first oracle-polynomial-time algorithm to detect so-called near-approximate stationary points for PA functions. A notable byproduct of our development in regularity is the first necessary and sufficient condition for the validity of an equality-type (Clarke) subdifferential sum rule. Our techniques revolve around two new geometric notions for convex polytopes and may be of independent interest in nonsmooth analysis. Moreover, some corollaries of our work on complexity and algorithms for stationarity testing address open questions in the literature. To demonstrate the versatility of our results, we complement our findings with applications to a series of structured piecewise smooth functions, including $\rho$-margin-loss SVM, piecewise affine regression, and nonsmooth neural networks.
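The hardness results above concern general PA functions; for the easy special case of a convex max-affine function $f(x) = \max_i (a_i^\top x + b_i)$, the Clarke subdifferential at $x$ is exactly the convex hull of the gradients of the active pieces, so $\epsilon$-stationarity reduces to a small quadratic program. Here is a minimal sketch of that tractable special case (not the paper's algorithm; SciPy's SLSQP solver and the tolerance `activity_tol` are illustrative choices):

```python
import numpy as np
from scipy.optimize import minimize

def stationarity_gap(A, b, x, activity_tol=1e-8):
    # For the convex piecewise affine function f(x) = max_i (a_i @ x + b_i),
    # the (Clarke) subdifferential at x is conv{a_i : piece i active at x}.
    # x is eps-stationary iff the minimum-norm element of that polytope
    # has norm <= eps, which a small convex QP over the simplex computes.
    vals = A @ x + b
    active = vals >= vals.max() - activity_tol
    G = A[active]                            # gradients of the active pieces
    k = G.shape[0]
    obj = lambda lam: 0.5 * np.sum((G.T @ lam) ** 2)
    cons = ({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},)
    res = minimize(obj, np.full(k, 1.0 / k), bounds=[(0.0, 1.0)] * k,
                   constraints=cons, method='SLSQP')
    return np.linalg.norm(G.T @ res.x)

# f(x) = |x1| + |x2| written as a max of four affine pieces; 0 is stationary.
A = np.array([[1., 1.], [1., -1.], [-1., 1.], [-1., -1.]])
b = np.zeros(4)
print(stationarity_gap(A, b, np.zeros(2)))   # ~0.0
```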