🤖 AI Summary
This work addresses stochastic and finite-sum convex optimization problems subject to deterministic constraints. Conventional approaches seek ε-expectedly feasible solutions, in which constraint violations are controlled only in expectation, so feasibility is not guaranteed with certainty, a critical requirement in practice. To bridge this gap, the authors target an ε-surely feasible stochastic optimal (ε-SFSO) solution: one whose constraint violation is deterministically bounded by ε while its expected optimality gap is at most ε. Methodologically, the deterministic constraints are handled via a sequence of quadratic penalty subproblems with appropriately chosen penalty parameters, each solved by a single pass of an accelerated stochastic gradient (ASG) scheme or a modified variance-reduced ASG scheme. Theoretically, the paper establishes first-order oracle complexity bounds for computing an ε-SFSO solution and, as a byproduct, derives corresponding complexity results for the sample average approximation method.
📝 Abstract
In this paper, we study a class of stochastic and finite-sum convex optimization problems with deterministic constraints. Existing methods typically aim to find an $ε$-*expectedly feasible stochastic optimal* solution, in which the expected constraint violation and expected optimality gap are both within a prescribed tolerance $ε$. However, in many practical applications, constraints must be nearly satisfied with certainty, rendering such solutions potentially unsuitable due to the risk of substantial violations. To address this issue, we propose stochastic first-order methods for finding an $ε$-*surely feasible stochastic optimal* ($ε$-SFSO) solution, where the constraint violation is deterministically bounded by $ε$ and the expected optimality gap is at most $ε$. Our methods apply an accelerated stochastic gradient (ASG) scheme or a modified variance-reduced ASG scheme *only once* to a sequence of quadratic penalty subproblems with appropriately chosen penalty parameters. We establish first-order oracle complexity bounds for the proposed methods in computing an $ε$-SFSO solution. As a byproduct, we also derive first-order oracle complexity results for the sample average approximation (SAA) method in computing an $ε$-SFSO solution of the stochastic optimization problem, when our proposed methods are used to solve the resulting sample average problem.
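To make the quadratic-penalty idea concrete, here is a minimal sketch (not the paper's actual algorithm: plain SGD stands in for the ASG/variance-reduced schemes, and the penalty schedule, step sizes, and toy problem are illustrative assumptions). It solves min E[(x − ξ)²] subject to x ≥ 1 by running stochastic gradient steps on the penalized objective F_ρ(x) = f(x) + (ρ/2)·max(g(x), 0)² over an increasing sequence of penalty parameters ρ, warm-starting each subproblem from the previous solution:

```python
import numpy as np

def quadratic_penalty_sgd(grad_f_sample, g, grad_g, x0, rhos,
                          steps=2000, lr0=0.5, rng=None):
    """Illustrative quadratic-penalty scheme: for each penalty parameter rho,
    run SGD on F_rho(x) = f(x) + (rho/2) * max(g(x), 0)**2, warm-starting
    from the previous subproblem's solution (hypothetical helper, not the
    paper's ASG method)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = x0
    for rho in rhos:
        for t in range(1, steps + 1):
            lr = lr0 / (rho * np.sqrt(t))  # shrink steps as rho grows
            # Gradient of the penalty term (rho/2) * max(g(x), 0)**2.
            pen_grad = rho * max(g(x), 0.0) * grad_g(x)
            x = x - lr * (grad_f_sample(x, rng) + pen_grad)
    return x

# Toy instance: minimize E[(x - xi)^2] with xi ~ N(0, 1), subject to
# x >= 1, written as g(x) = 1 - x <= 0. The constrained minimizer is 1.
grad_f = lambda x, rng: 2.0 * (x - rng.normal())  # stochastic gradient of f
g = lambda x: 1.0 - x
grad_g = lambda x: -1.0
x_star = quadratic_penalty_sgd(grad_f, g, grad_g, x0=0.0,
                               rhos=[10.0, 100.0, 1000.0])
```

For a fixed ρ the penalized stationary point sits at x = ρ/(ρ + 2), slightly infeasible; this is exactly the phenomenon the ε-SFSO notion quantifies, since the residual violation max(g(x), 0) shrinks deterministically as ρ grows.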