Stability-based Generalization Analysis of Randomized Coordinate Descent for Pairwise Learning

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of generalization theory for Randomized Coordinate Descent (RCD) in pairwise learning tasks (e.g., ranking, metric learning). Methodologically, it introduces parameter stability analysis for RCD, previously unexplored in pairwise learning, integrates early stopping to balance optimization and estimation errors, and derives expected generalization upper bounds for both convex and strongly convex objectives. The theoretical contributions are twofold: (i) it establishes tight generalization bounds under standard pairwise loss functions; and (ii) under a low-noise condition, it achieves the optimal $O(1/n)$ convergence rate for excess risk. To the best of the authors' knowledge, this is the first systematic theoretical guarantee for RCD in pairwise learning and the first stability-based generalization analysis in this setting, thereby filling a critical gap in the literature.

📝 Abstract
Pairwise learning includes various machine learning tasks, with ranking and metric learning serving as the primary representatives. While randomized coordinate descent (RCD) is popular in various learning problems, there is much less theoretical analysis on the generalization behavior of models trained by RCD, especially under the pairwise learning framework. In this paper, we consider the generalization of RCD for pairwise learning. We measure the on-average argument stability for both convex and strongly convex objective functions, based on which we develop generalization bounds in expectation. The early-stopping strategy is adopted to quantify the balance between estimation and optimization. Our analysis further incorporates the low-noise setting into the excess risk bound to achieve an optimistic bound of $O(1/n)$, where $n$ is the sample size.
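To make the setting concrete, the following is a minimal sketch of RCD on a pairwise objective. The pairwise squared loss, step size, and iteration budget (early stopping via `n_iters`) are illustrative assumptions, not the paper's exact setup: at each step, one coordinate is sampled uniformly at random and updated with a partial-gradient step.

```python
import numpy as np

def pairwise_ls_loss(w, X, y):
    """Average pairwise squared loss: mean over i<j of ((y_i - y_j) - w^T(x_i - x_j))^2."""
    n = len(y)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            diff = (y[i] - y[j]) - (X[i] - X[j]) @ w
            total += diff ** 2
    return total / (n * (n - 1) / 2)

def rcd_pairwise(X, y, n_iters=200, eta=0.05, rng=None):
    """Randomized coordinate descent: at each iteration, pick one coordinate
    uniformly at random and take a gradient step on that coordinate only.
    Running for a fixed n_iters plays the role of early stopping."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        k = rng.integers(d)
        # partial derivative of the pairwise loss w.r.t. coordinate k
        grad_k = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                diff = (X[i] - X[j]) @ w - (y[i] - y[j])
                grad_k += 2 * diff * (X[i, k] - X[j, k])
        grad_k /= n * (n - 1) / 2
        w[k] -= eta * grad_k
    return w
```

Because each summand couples two examples, perturbing a single training point affects $O(n)$ loss terms, which is why the stability analysis for pairwise learning differs from the pointwise case.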
Problem

Research questions and friction points this paper is trying to address.

Analyzes generalization of randomized coordinate descent in pairwise learning.
Develops generalization bounds for convex and strongly convex objectives.
Incorporates low-noise setting to achieve optimistic risk bound.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes RCD generalization in pairwise learning
Uses on-average argument stability for bounds
Incorporates low-noise for optimistic risk bound
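The on-average argument stability the bounds rest on can be illustrated empirically: train RCD on a sample $S$ and on a neighboring sample $S'$ differing in one example, then measure the distance between the learned parameters. The loss, step size, and single-example replacement scheme below are assumptions for illustration only; the same random coordinate sequence is used for both runs so that only the data differ.

```python
import numpy as np

def rcd_step(w, X, y, eta, rng):
    """One randomized-coordinate step on a pairwise squared loss."""
    n, d = X.shape
    k = rng.integers(d)
    grad_k = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            diff = (X[i] - X[j]) @ w - (y[i] - y[j])
            grad_k += 2 * diff * (X[i, k] - X[j, k])
    grad_k /= n * (n - 1) / 2
    w = w.copy()
    w[k] -= eta * grad_k
    return w

def train(X, y, n_iters, eta, seed):
    rng = np.random.default_rng(seed)  # seed fixes the coordinate sequence
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        w = rcd_step(w, X, y, eta, rng)
    return w

def argument_stability(X, y, x_new, y_new, n_iters=100, eta=0.05, seed=0):
    """||w_S - w_{S'}||, where S' replaces the first example of S and both
    runs share the same random coordinate sequence."""
    w_S = train(X, y, n_iters, eta, seed)
    X2, y2 = X.copy(), y.copy()
    X2[0], y2[0] = x_new, y_new
    w_S2 = train(X2, y2, n_iters, eta, seed)
    return np.linalg.norm(w_S - w_S2)
```

Averaging this quantity over draws of the data and the replaced example gives an empirical analogue of on-average argument stability, which the paper's bounds control in expectation.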
Liang Wu
Center of Statistical Research, School of Statistics, Southwestern University of Finance and Economics, Chengdu, China; Big Data Laboratory on Financial Security and Behavior, SWUFE (Laboratory of Philosophy and Social Sciences, Ministry of Education), Chengdu, China
Ruixi Hu
Center of Statistical Research, School of Statistics, Southwestern University of Finance and Economics, Chengdu, China
Yunwen Lei
The University of Hong Kong
Statistical Learning Theory · Stochastic Optimization · Machine Learning