Efficient Robust Conformal Prediction via Lipschitz-Bounded Networks

📅 2025-06-05
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Traditional conformal prediction (CP) fails under adversarial attacks, while existing robust CP methods suffer from excessively large prediction sets or high computational overhead on large-scale tasks. To address this, we propose Lip-RCP—the first efficient robust prediction framework that deeply integrates 1-Lipschitz robust neural networks with CP. Methodologically, we impose Lipschitz constraints to ensure output stability and derive, for the first time, a theoretical worst-case coverage bound for standard CP under arbitrary attack magnitudes. Experiments on medium- and large-scale benchmarks (e.g., ImageNet) show that Lip-RCP reduces robust prediction set size by up to 42% over state-of-the-art methods while accelerating inference by 3.8×. Crucially, it strictly guarantees both nominal coverage ≥90% and finite-sample robust coverage—without compromising statistical validity.

Technology Category

Application Category

📝 Abstract
Conformal Prediction (CP) has proven to be an effective post-hoc method for improving the trustworthiness of neural networks by providing prediction sets with finite-sample guarantees. However, under adversarial attacks, classical conformal guarantees do not hold anymore: this problem is addressed in the field of Robust Conformal Prediction. Several methods have been proposed to provide robust CP sets with guarantees under adversarial perturbations, but, for large scale problems, these sets are either too large or the methods are too computationally demanding to be deployed in real life scenarios. In this work, we propose a new method that leverages Lipschitz-bounded networks to precisely and efficiently estimate robust CP sets. When combined with a 1-Lipschitz robust network, we demonstrate that our lip-rcp method outperforms state-of-the-art results in both the size of the robust CP sets and computational efficiency in medium and large-scale scenarios such as ImageNet. Taking a different angle, we also study vanilla CP under attack, and derive new worst-case coverage bounds of vanilla CP sets, which are valid simultaneously for all adversarial attack levels. Our lip-rcp method makes this second approach as efficient as vanilla CP while also allowing robustness guarantees.
Problem

Research questions and friction points this paper is trying to address.

Enhancing robustness of conformal prediction under adversarial attacks
Reducing computational demands for large-scale robust CP sets
Improving prediction set size and efficiency in CP methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lipschitz-bounded networks for robust CP
Efficient robust CP sets estimation
Worst-case coverage bounds for vanilla CP
🔎 Similar Papers
No similar papers found.