Derivative-Free Sequential Quadratic Programming for Equality-Constrained Stochastic Optimization

📅 2025-10-25
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper addresses nonlinear stochastic optimization with deterministic equality constraints under zeroth-order information, where the objective function is corrupted by sampling noise. The authors propose a gradient- and Hessian-free stochastic sequential quadratic programming (SQP) algorithm: zeroth-order gradient and Hessian estimates are constructed via simultaneous perturbation stochastic approximation (SPSA); a momentum-based online debiasing mechanism coupled with moving averaging is introduced to mitigate estimation bias; and each iteration requires only a constant number of function evaluations. Under standard regularity assumptions, they establish global almost-sure convergence and local asymptotic normality of the estimator, enabling online statistical inference for the optimal parameters. Numerical experiments on benchmark nonlinearly constrained problems demonstrate the algorithm's efficiency and robustness.
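The SPSA estimator at the heart of the method uses only two function evaluations per gradient estimate, regardless of dimension. The following is an illustrative, stdlib-only sketch of that idea, not the paper's implementation; the perturbation radius `c` and function names are assumptions:

```python
import random

def spsa_gradient(f, x, c=1e-3):
    """One SPSA gradient estimate from two zeroth-order evaluations.

    f is a (possibly noisy) function oracle, x a list of floats, and
    c a hypothetical perturbation radius. A single estimate is biased
    by cross terms; averaging many estimates (as the paper's momentum
    scheme does) drives the noise down.
    """
    # Rademacher perturbation: each coordinate is +1 or -1.
    delta = [random.choice([-1.0, 1.0]) for _ in x]
    x_plus = [xi + c * di for xi, di in zip(x, delta)]
    x_minus = [xi - c * di for xi, di in zip(x, delta)]
    diff = (f(x_plus) - f(x_minus)) / (2.0 * c)
    # The i-th partial-derivative estimate divides by delta_i.
    return [diff / di for di in delta]
```

Note that the cost is two evaluations per estimate whatever the dimension of `x`, which is what makes the per-iteration evaluation count dimension-independent.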

πŸ“ Abstract
We consider solving nonlinear optimization problems with a stochastic objective and deterministic equality constraints, assuming that only zero-order information is available for both the objective and constraints, and that the objective is also subject to random sampling noise. Under this setting, we propose a Derivative-Free Stochastic Sequential Quadratic Programming (DF-SSQP) method. Due to the lack of derivative information, we adopt a simultaneous perturbation stochastic approximation (SPSA) technique to randomly estimate the gradients and Hessians of both the objective and constraints. This approach requires only a dimension-independent number of zero-order evaluations -- as few as eight -- at each iteration step. A key distinction between our derivative-free and existing derivative-based SSQP methods lies in the intricate random bias introduced into the gradient and Hessian estimates of the objective and constraints, brought by stochastic zero-order approximations. To address this issue, we introduce an online debiasing technique based on momentum-style estimators that properly aggregate past gradient and Hessian estimates to reduce stochastic noise, while avoiding excessive memory costs via a moving averaging scheme. Under standard assumptions, we establish the global almost-sure convergence of the proposed DF-SSQP method. Notably, we further complement the global analysis with local convergence guarantees by demonstrating that the rescaled iterates exhibit asymptotic normality, with a limiting covariance matrix resembling the minimax optimal covariance achieved by derivative-based methods, albeit larger due to the absence of derivative information. Our local analysis enables online statistical inference of model parameters leveraging DF-SSQP. Numerical experiments on benchmark nonlinear problems demonstrate both the global and local behavior of DF-SSQP.
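At each iteration, an SQP method computes its step by solving an equality-constrained quadratic subproblem, i.e. a KKT linear system. The following stdlib-only sketch illustrates that step under simplifying assumptions (the helper `solve_linear` and all names are illustrative; in DF-SSQP the quantities `B`, `g`, `J` would themselves be SPSA-based estimates rather than exact derivatives):

```python
def solve_linear(A, b):
    """Tiny Gaussian elimination with partial pivoting (illustrative helper)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for cc in range(col, n + 1):
                M[r][cc] -= factor * M[col][cc]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][cc] * x[cc] for cc in range(r + 1, n))) / M[r][r]
    return x

def sqp_step(B, g, J, c):
    """Solve the equality-constrained SQP subproblem's KKT system

        [B  J^T] [d  ]   [-g]
        [J   0 ] [lam] = [-c]

    for the primal step d and multiplier estimates lam, where B is a
    Hessian (approximation), g the objective gradient, J the constraint
    Jacobian, and c the constraint values.
    """
    n, m = len(g), len(c)
    K = [[0.0] * (n + m) for _ in range(n + m)]
    for i in range(n):
        for j in range(n):
            K[i][j] = B[i][j]
        for k in range(m):
            K[i][n + k] = J[k][i]
            K[n + k][i] = J[k][i]
    rhs = [-gi for gi in g] + [-ci for ci in c]
    sol = solve_linear(K, rhs)
    return sol[:n], sol[n:]
```

The sketch assumes the KKT matrix is nonsingular; practical SQP codes regularize `B` or the system when it is not.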
Problem

Research questions and friction points this paper is trying to address.

Solving stochastic optimization with equality constraints using derivative-free methods
Estimating gradients and Hessians via zero-order evaluations with random noise
Achieving global convergence and local asymptotic normality without derivatives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Derivative-free stochastic sequential quadratic programming method
Simultaneous perturbation stochastic approximation for gradient estimation
Online debiasing technique using momentum-style estimators
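The momentum-style aggregation behind the third bullet can be sketched as an exponentially weighted moving average of per-iteration estimates; a minimal sketch, where the weight `beta` and function name are assumptions rather than the paper's exact recursion:

```python
def moving_average_update(avg, new, beta=0.9):
    """Momentum-style aggregation of noisy per-iteration estimates.

    Exponentially weighted moving average with hypothetical weight beta:
    past SPSA gradient/Hessian estimates are blended in to damp
    stochastic noise without storing the full history.
    """
    return [beta * a + (1.0 - beta) * g for a, g in zip(avg, new)]
```

Because only the running average is kept, memory cost stays constant in the number of iterations, which is the point of the moving-averaging scheme described in the abstract.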