Closing the Approximation Gap of Partial AUC Optimization: A Tale of Two Formulations

📅 2025-11-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Partial AUC (pAUC) optimization suffers from uncontrollable approximation error and poor scalability because interval-based sample selection is NP-hard. Method: This paper proposes two instance-wise minimax reformulations that replace exact interval selection with threshold learning, enabling efficient and unbiased pAUC optimization. The algorithms integrate smoothing techniques with efficient solvers, achieving linear per-iteration complexity in the sample size and $O(\varepsilon^{-1/3})$ convergence rates for both one-way and two-way pAUC optimization. Contribution/Results: The paper establishes the first theoretical framework balancing negligible approximation error against estimation unbiasedness, derives tight generalization bounds, and quantifies how the TPR/FPR constraints affect generalization. Extensive experiments on benchmark datasets demonstrate significant improvements in both accuracy and efficiency over state-of-the-art methods.
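The reduction from "hard" interval selection to threshold learning can be illustrated with the classic CVaR-style identity: the average of the $k$ largest per-instance losses equals $\min_s \{ s + \frac{1}{k}\sum_i (\ell_i - s)_+ \}$, so a single learned threshold variable replaces combinatorial top-$k$ selection. A minimal numerical sketch of that identity (illustrative function names; this is not the paper's actual solver):

```python
import numpy as np

def topk_mean(losses, k):
    """Average of the k largest per-instance losses (exact 'hard' selection)."""
    return np.sort(losses)[-k:].mean()

def threshold_form(losses, k):
    """min_s  s + (1/k) * sum(max(l - s, 0)).

    The objective is convex and piecewise linear in s, with kinks at the data
    points, so scanning the losses themselves recovers the exact minimum.
    """
    return min(s + np.maximum(losses - s, 0).sum() / k for s in losses)

rng = np.random.default_rng(0)
l = rng.normal(size=200)
print(np.isclose(topk_mean(l, 30), threshold_form(l, 30)))  # True
```

Because the threshold form is an unconstrained scalar minimization over a sum of instance-wise terms, it sidesteps the NP-hard selection step while costing only one pass over the sample.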

📝 Abstract
As a variant of the Area Under the ROC Curve (AUC), the partial AUC (PAUC) focuses on a specific range of false positive rate (FPR) and/or true positive rate (TPR) in the ROC curve. It is a pivotal evaluation metric in real-world scenarios with both class imbalance and decision constraints. However, selecting instances within these constrained intervals during its calculation is NP-hard, and thus typically requires approximation techniques for practical resolution. Despite the progress made in PAUC optimization over the last few years, most existing methods still suffer from uncontrollable approximation errors or limited scalability when optimizing the approximate PAUC objectives. In this paper, we close the approximation gap of PAUC optimization by presenting two simple instance-wise minimax reformulations: one with an asymptotically vanishing gap, the other with unbiasedness at the cost of more variables. Our key idea is to first establish an equivalent instance-wise problem to lower the time complexity, then simplify the complicated sample selection procedure via threshold learning, and finally apply different smoothing techniques. Equipped with an efficient solver, the resulting algorithms enjoy a linear per-iteration computational complexity w.r.t. the sample size and a convergence rate of $O(\varepsilon^{-1/3})$ for typical one-way and two-way PAUCs. Moreover, we provide a tight generalization bound of our minimax reformulations. The result explicitly demonstrates the impact of the TPR/FPR constraints $\alpha$/$\beta$ on the generalization and exhibits a sharp order of $\tilde{O}(\alpha^{-1} n_+^{-1} + \beta^{-1} n_-^{-1})$. Finally, extensive experiments on several benchmark datasets validate the strength of our proposed methods.
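For concreteness, the empirical one-way PAUC with FPR restricted to $[0, \beta]$ averages pairwise ranking indicators over all positives and only the $\lceil \beta n_- \rceil$ highest-scored ("hardest") negatives; this explicit sample-selection step is what the paper's reformulations replace with threshold learning. A small illustrative computation (function and variable names are ours):

```python
import numpy as np

def partial_auc_fpr(pos_scores, neg_scores, beta):
    """Empirical one-way PAUC with FPR in [0, beta]: rank positives against
    the ceil(beta * n_minus) highest-scored negatives only."""
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    k = int(np.ceil(beta * len(neg)))
    hard_neg = np.sort(neg)[-k:]          # negatives inside the FPR interval
    # fraction of (positive, hard-negative) pairs ranked correctly
    return float(np.mean(pos[:, None] > hard_neg[None, :]))

print(partial_auc_fpr([3.0, 4.0], [0.0, 1.0, 2.0, 5.0], beta=0.5))  # 0.5
```

With $\beta = 1$ this reduces to the ordinary empirical AUC; shrinking $\beta$ concentrates the metric on the negatives that are hardest to rank below the positives.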
Problem

Research questions and friction points this paper is trying to address.

Optimizing partial AUC with controllable approximation errors.
Addressing scalability issues in partial AUC optimization methods.
Closing the approximation gap via minimax reformulations.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two instance-wise minimax reformulations close the approximation gap
Threshold learning simplifies sample selection procedure
Efficient solver ensures linear per-iteration computational complexity
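One way to read the smoothing step behind these bullets: replace the non-smooth $(\cdot)_+$ in the threshold-based objective with a temperature-$\tau$ softplus, making the minimax problem amenable to first-order solvers at $O(n)$ cost per evaluation. A hedged sketch (the objective shape and $\tau$ are our illustrative choices, not necessarily the paper's exact surrogate):

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x))
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def hard_objective(losses, s, beta):
    """CVaR-style threshold objective: s + mean((l - s)_+) / beta."""
    return s + np.mean(np.maximum(losses - s, 0.0)) / beta

def smooth_objective(losses, s, beta, tau=0.1):
    """Softplus-smoothed surrogate; one evaluation is O(n) in the sample size."""
    return s + tau * np.mean(softplus((losses - s) / tau)) / beta
```

Since $\mathrm{relu}(x) \le \tau\,\mathrm{softplus}(x/\tau) \le \mathrm{relu}(x) + \tau \log 2$, the smoothed objective overestimates the hard one by at most $\tau \log 2 / \beta$ uniformly in $s$, so the induced approximation gap vanishes as $\tau \to 0$.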