Subgradient Selection Convergence Implies Uniform Subdifferential Set Convergence: And Other Tight Convergence Rates in Stochastic Convex Composite Minimization

📅 2024-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the uniform Hausdorff convergence of empirical subdifferential sets to the population subdifferential set in nonsmooth, nonconvex stochastic optimization, aiming to theoretically justify the approximation of population stationary points by empirical ones. We develop an analytical framework that dispenses with assumptions of continuous differentiability of the objective and strong regularity of the data distribution. Crucially, we establish, for the first time, a rigorous equivalence between uniform convergence of arbitrarily selected subgradients and uniform Hausdorff convergence of the subdifferential sets themselves. Leveraging tools from nonsmooth analysis, sample-average approximation (SAA) theory, and the geometry of the Hausdorff metric, we derive finite-sample convergence rates for subdifferential sets under stochastic convex-composite objectives. Our bounds are tight and hold without smoothness or strong convexity assumptions, thereby strengthening guarantees on the stability of nonsmooth risk landscapes and on the reliability of empirical stationary points in finite-sample regimes.
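
A schematic rendering of the central reduction, in standard nonsmooth-analysis notation; the open set U, the selections g_n, g, and the rate r_n are illustrative placeholders rather than the paper's exact quantities:

```latex
% Hausdorff distance between subdifferential sets, and the reduction
% principle in schematic form (constants suppressed in \lesssim).
\[
  d_H(A, B) \;=\; \max\Bigl\{ \sup_{a \in A} \operatorname{dist}(a, B),\;
                              \sup_{b \in B} \operatorname{dist}(b, A) \Bigr\}
\]
\[
  \sup_{x \in U} \bigl\| g_n(x) - g(x) \bigr\| \le r_n
  \ \text{ for all selections } g_n(x) \in \partial f_n(x),\ g(x) \in \partial f(x)
  \;\Longrightarrow\;
  \sup_{x \in U} d_H\bigl(\partial f_n(x), \partial f(x)\bigr) \lesssim r_n
\]
```

The nontrivial direction is this implication: pointwise control of arbitrarily chosen subgradients upgrades to control of the entire subdifferential sets in the Hausdorff metric.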

📝 Abstract
In nonsmooth, nonconvex stochastic optimization, understanding the uniform convergence of subdifferential mappings is crucial for analyzing how stationary points of sample average approximations of the risk approach stationary points of the population risk. Yet characterizing this convergence remains a fundamental challenge. This work introduces a novel perspective by connecting the uniform convergence of subdifferential mappings to that of subgradient mappings as the empirical risk converges to the population risk. We prove that, for stochastic weakly-convex objectives and within any open set, a uniform bound on the convergence of subgradients, chosen arbitrarily from the corresponding subdifferential sets, translates to a uniform bound on the convergence of the subdifferential sets themselves, measured by the Hausdorff metric. Using this technique, we derive uniform convergence rates for subdifferential sets of stochastic convex-composite objectives. Our results do not rely on key distributional assumptions in the literature, such as continuous differentiability of the population objective, yet still provide tight convergence rates. These guarantees lead to new insights into the nonsmooth landscapes of such objectives in finite-sample regimes.
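
For intuition, a minimal numerical sketch, not taken from the paper: for the convex (hence weakly-convex) objective f(x) = E|x - ξ| with standard normal ξ, the population subdifferential is the singleton {2Φ(x) - 1}, while the empirical subdifferential is an interval at sample points. The uniform Hausdorff gap over a grid can then be estimated directly; all names below are illustrative.

```python
# Minimal sketch (illustrative, not the paper's method): compare empirical
# and population subdifferentials of f(x) = E|x - xi|, xi ~ N(0, 1),
# uniformly over a grid, in the Hausdorff metric.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 10_000
xi = rng.standard_normal(n)

def empirical_subdiff(x):
    """Subdifferential of (1/n) * sum_i |x - xi_i| as an interval [lo, hi]."""
    below = np.sum(xi < x)
    above = np.sum(xi > x)
    ties = n - below - above          # samples where |x - xi_i| is nonsmooth
    center = (below - above) / n      # sum of the +-1/n slopes away from ties
    return center - ties / n, center + ties / n

def hausdorff_to_point(lo, hi, c):
    """Hausdorff distance between the interval [lo, hi] and the point {c}."""
    return max(abs(lo - c), abs(hi - c))

grid = np.linspace(-2.0, 2.0, 401)
gaps = [hausdorff_to_point(*empirical_subdiff(x), 2.0 * norm.cdf(x) - 1.0)
        for x in grid]
print(f"uniform subdifferential gap on [-2, 2]: {max(gaps):.4f}")
```

The printed gap shrinks at roughly the n^{-1/2} rate one expects from SAA theory; the paper's contribution is to make such uniform, set-valued statements precise and tight for general convex-composite objectives.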
Problem

Research questions and friction points this paper is trying to address.

Uniform convergence of subdifferentials in stochastic optimization
Approximation of population stationary points by empirical ones
Convergence rates for nonsmooth convex-composite optimization problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reduction principle for subdifferential convergence
Sharp uniform convergence rates without differentiability
Analysis of nonsmooth stochastic optimization landscapes