🤖 AI Summary
This work proposes a class of generalized Hessian estimators based on Random Direction Stochastic Approximation (RDSA) for zeroth-order optimization, where only noisy function evaluations are available. By leveraging multi-point function measurements, the method constructs Newton-type update directions whose estimation bias decreases in order as the number of measurements grows. Theoretically, the estimators are shown to be asymptotically unbiased, and the associated stochastic Newton algorithm is proven to converge, with both asymptotic and non-asymptotic analyses provided. These results extend the theoretical foundations of second-order methods for zeroth-order optimization. Numerical experiments further demonstrate the effectiveness of the multi-point measurement strategy in improving optimization performance. A minimal sketch of the kind of estimator involved is given below.
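To make the construction concrete, here is a minimal sketch of a three-point, random-direction Hessian estimator in the spirit the summary describes, assuming Gaussian directions. It is not the paper's generalized multi-point estimator (whose exact form and scaling are the paper's contribution); the names `f`, `delta`, and `num_samples` are illustrative.

```python
import numpy as np

def rdsa_hessian_estimate(f, x, delta=1e-2, num_samples=100, rng=None):
    """Illustrative three-point random-direction Hessian estimate.

    Averages (f(x + delta*u) + f(x - delta*u) - 2 f(x)) / (2 delta^2)
    times (u u^T - I) over Gaussian directions u ~ N(0, I).
    For Gaussian u, E[(u^T H u)(u u^T - I)] = 2 H, so the estimate is
    unbiased on quadratics and carries O(delta^2) bias in general.
    """
    rng = np.random.default_rng(rng)
    d = x.size
    fx = f(x)  # the single "center" measurement shared by all samples
    H_hat = np.zeros((d, d))
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        second_diff = f(x + delta * u) + f(x - delta * u) - 2.0 * fx
        H_hat += (second_diff / (2.0 * delta**2)) * (np.outer(u, u) - np.eye(d))
    return H_hat / num_samples
```

Each sample costs two extra function evaluations; using more measurement points per direction (as in the paper's generalized family) is what lowers the order of the bias in `delta`.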
📝 Abstract
We present a family of generalized Hessian estimators of the objective, constructed via random direction stochastic approximation (RDSA) from noisy function measurements alone. The form of each estimator and the order of its bias depend on the number of function measurements; in particular, we demonstrate that estimators using more function measurements exhibit lower-order estimation bias. We establish the asymptotic unbiasedness of the estimators, and we provide asymptotic and non-asymptotic convergence analyses for stochastic Newton methods that incorporate them. Finally, we report numerical experiments that validate our theoretical findings.
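For intuition on how such an estimate plugs into the stochastic Newton methods the abstract analyzes, the hedged sketch below pairs a standard two-point RDSA gradient estimate with the Hessian sketch above. The eigenvalue-flooring safeguard (`eig_floor`) and the step size are common practical choices assumed here for illustration, not details from the paper.

```python
def rdsa_newton_step(f, x, delta=1e-2, num_samples=50,
                     step_size=1.0, eig_floor=1e-3, rng=None):
    """One zeroth-order Newton step (illustrative, not the paper's exact scheme).

    The gradient uses the standard two-point random-direction estimate
    u * (f(x + delta*u) - f(x - delta*u)) / (2 delta), which is unbiased
    for Gaussian u up to O(delta^2) smoothing error.
    """
    rng = np.random.default_rng(rng)
    d = x.size
    g_hat = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        g_hat += (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u
    g_hat /= num_samples

    H_hat = rdsa_hessian_estimate(f, x, delta, num_samples, rng)
    # Symmetrize, then floor |eigenvalues| so the Newton system is
    # positive definite and well conditioned (a practical safeguard).
    w, V = np.linalg.eigh(0.5 * (H_hat + H_hat.T))
    w = np.maximum(np.abs(w), eig_floor)
    newton_dir = V @ ((V.T @ g_hat) / w)
    return x - step_size * newton_dir
```

In practice one would also decay `delta` and `step_size` over iterations, which is the regime in which the asymptotic and non-asymptotic guarantees described above apply.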