AI Summary
This paper addresses continuous nonlinear equality-constrained optimization problems in which the constraints are defined by expectations (or empirical averages) over a large number of terms. To overcome the high sample complexity and computational cost of conventional full-sample methods, we propose the first progressive sampling strategy for such problems: starting from a small random sample, the sample set is expanded incrementally across iterations, and a sequential optimization framework is designed that leverages first- and second-order derivative information of the constraint functions. Under standard regularity assumptions, we establish rigorous theoretical guarantees showing that our method strictly improves upon the worst-case sample complexity bound of the full-sample baseline. Numerical experiments on canonical test problems confirm the method's efficiency and practical feasibility.
Abstract
An algorithm is proposed, analyzed, and tested for solving continuous nonlinear equality-constrained optimization problems in which the constraints are defined by an expectation or an average over a large (finite) number of terms. The main idea of the algorithm is to solve a sequence of equality-constrained subproblems, each involving a finite sample of constraint-function terms, where the sample set grows progressively from one subproblem to the next. Under assumptions about the constraint functions and their first- and second-order derivatives that are reasonable in some real-world settings of interest, it is shown that, with a sufficiently large initial sample, solving a sequence of problems defined through progressive sampling yields a better worst-case sample complexity bound than solving a single problem with the full set of samples. The results of numerical experiments on a set of test problems demonstrate that the proposed approach can be effective in practice.
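To make the progressive-sampling idea concrete, here is a minimal sketch, not the paper's algorithm: it solves a sequence of equality-constrained subproblems over a nested, growing random sample of constraint terms, warm-starting each subproblem from the previous solution. The toy objective, the averaged linear constraint terms, the doubling schedule, and the use of SciPy's SLSQP solver are all illustrative assumptions; the paper's method additionally exploits first- and second-order constraint derivatives within its own sequential framework.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy instance: minimize ||x||^2 subject to the *average* of N constraint
# terms being zero, c(x) = (1/N) * sum_i (a_i . x - b_i) = 0.
N, d = 10_000, 5
A = rng.normal(size=(N, d))
b = rng.normal(size=N)

def f(x):
    return x @ x  # smooth objective

def sampled_constraint(x, idx):
    # Empirical average of the sampled constraint terms.
    return np.mean(A[idx] @ x - b[idx])

def progressive_sampling(x0, n0=32, growth=2):
    """Solve a sequence of equality-constrained subproblems over a nested,
    growing sample set, warm-starting each from the previous solution."""
    perm = rng.permutation(N)   # fixed shuffle; prefixes form the growing sample
    x, n = x0, n0
    while True:
        n = min(n, N)
        idx = perm[:n]          # current sample: first n terms of the shuffle
        cons = {"type": "eq",
                "fun": lambda x, idx=idx: sampled_constraint(x, idx)}
        x = minimize(f, x, constraints=[cons], method="SLSQP").x
        if n == N:              # full sample reached: last subproblem solved
            return x
        n *= growth             # progressively enlarge the sample

x_star = progressive_sampling(np.ones(d))
print("full-sample constraint residual:", sampled_constraint(x_star, np.arange(N)))
```

Early subproblems here are cheap because each constraint evaluation averages only a few terms, which mirrors the claimed sample-complexity advantage over solving a single full-sample problem from a cold start.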