🤖 AI Summary
Formal verification of neural network robustness against ℓ₀ (few-pixel) attacks is challenging because the perturbation space is non-convex. Method: This work establishes, for the first time, that the convex hull of an ℓ₀ ball is the intersection of an axis-aligned bounding box and an asymmetrically scaled ℓ₁-like polytope; leveraging this geometric characterization, the authors propose a linear bound propagation algorithm that computes bounds precisely over the convex hull. Contribution/Results: The method achieves significantly tighter bounds than propagation over the bounding box or the ℓ₁-like polytope alone. On the most challenging ℓ₀ verification benchmarks, it speeds up the state-of-the-art verifier by 1.24–7.07× (geometric mean: 3.16×) while preserving scalability and theoretical soundness. This provides a novel, principled framework for formal verification under non-convex perturbations.
📝 Abstract
Few-pixel attacks mislead a classifier by modifying a few pixels of an image. Their perturbation space is an $\ell_0$-ball, which is not convex, unlike $\ell_p$-balls for $p \geq 1$. However, existing local robustness verifiers typically scale by relying on linear bound propagation, which captures convex perturbation spaces. We show that the convex hull of an $\ell_0$-ball is the intersection of its bounding box and an asymmetrically scaled $\ell_1$-like polytope. The volumes of the convex hull and this polytope are nearly equal as the input dimension increases. We then show a linear bound propagation that precisely computes bounds over the convex hull and is significantly tighter than bound propagations over the bounding box or our $\ell_1$-like polytope. This bound propagation scales the state-of-the-art $\ell_0$ verifier on its most challenging robustness benchmarks by 1.24x–7.07x, with a geometric mean of 3.16x.
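The convex-hull characterization can be illustrated numerically. The sketch below is an assumption-laden toy check, not the paper's exact construction: it assumes inputs in $[0,1]^d$ and an asymmetric scaling in which an upward change in coordinate $i$ is scaled by $1 - x^0_i$ and a downward change by $x^0_i$ (the per-coordinate reach toward each box face). Under these assumptions, every point of the $\ell_0$-ball of radius $k$ contributes at most 1 per changed coordinate to the scaled sum, and since the constraint is convex, every convex combination of ball points stays inside the bounding box and the scaled $\ell_1$-like polytope:

```python
import random

def scaled_l1(x, x0):
    """Asymmetrically scaled l1-like distance from x0 (assumed scaling)."""
    total = 0.0
    for xi, ci in zip(x, x0):
        if xi > ci:
            total += (xi - ci) / (1.0 - ci)  # upward change, scaled by reach to 1
        else:
            total += (ci - xi) / ci          # downward change, scaled by reach to 0
    return total

def random_l0_point(x0, k):
    """A point in [0,1)^d differing from x0 in at most k coordinates."""
    x = list(x0)
    for i in random.sample(range(len(x0)), k):
        x[i] = random.random()
    return x

random.seed(0)
d, k = 8, 2
# Keep x0 away from 0 and 1 so the scaling factors are nonzero.
x0 = [random.uniform(0.1, 0.9) for _ in range(d)]

ok = True
for _ in range(1000):
    # Random convex combination of 5 points from the l0-ball.
    pts = [random_l0_point(x0, k) for _ in range(5)]
    w = [random.random() for _ in range(5)]
    s = sum(w)
    x = [sum(wj * p[i] for wj, p in zip(w, pts)) / s for i in range(d)]
    # The combination must lie in the bounding box and the scaled polytope.
    ok = ok and all(0.0 <= xi <= 1.0 for xi in x)
    ok = ok and scaled_l1(x, x0) <= k + 1e-9

print(ok)  # True
```

The check passing for all samples is consistent with the convex hull being contained in the intersection of the box and the polytope; the paper's contribution is the stronger claim that the intersection equals the convex hull exactly, which a sampling test cannot establish.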