🤖 AI Summary
This work addresses the challenge of robustness evaluation for deep learning–based classification models applied to experimental data in particle and astroparticle physics. We propose MiniFool, a physics-aware adversarial attack algorithm that explicitly incorporates experimental measurement uncertainties as hard constraints in the attack formulation. MiniFool minimizes a cost function that combines a $\chi^2$ test statistic with the deviation from a target-class score, generating physically plausible adversarial examples via constrained minimization. Experiments on MNIST, IceCube data, and CMS Open Data demonstrate that MiniFool quantifies model robustness across uncertainty scales, uncovers a strong correlation between classification-flip probability and original prediction confidence, and enables label-free robustness assessment. Its core contribution lies in systematically embedding domain-specific priors (namely, physical consistency requirements and experimental error models) into the adversarial attack framework, thereby establishing a novel paradigm for trustworthiness verification of scientific AI models.
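One plausible concrete form of this cost function, written as a sketch under the assumption of Gaussian, uncorrelated per-feature uncertainties (the paper's exact parametrization and weighting may differ), is

$$
\mathcal{L}(\delta) \;=\; \chi^2(\delta) \;+\; \lambda\,\bigl(s_t(x+\delta) - s^{*}\bigr)^2,
\qquad
\chi^2(\delta) \;=\; \sum_i \frac{\delta_i^2}{\sigma_i^2},
$$

where $x$ is the original input, $\delta$ the perturbation, $\sigma_i$ the experimental uncertainty on feature $i$, $s_t$ the target-class score, $s^{*}$ its desired value, and $\lambda$ a weight balancing physical plausibility against attack success.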
📝 Abstract
In this paper, we present a new algorithm, MiniFool, that implements physics-inspired adversarial attacks for testing neural network-based classification tasks in particle and astroparticle physics. While we initially developed the algorithm for the search for astrophysical tau neutrinos with the IceCube Neutrino Observatory, we also apply it to data from other science domains, thus demonstrating its general applicability. Here, we apply the algorithm to the well-known MNIST data set and, furthermore, to Open Data from the CMS experiment at the Large Hadron Collider. The algorithm is based on minimizing a cost function that combines a $\chi^2$-based test statistic with the deviation from the desired target score. The test statistic quantifies the probability of the perturbations applied to the data, given the experimental uncertainties. For the studied use cases, we find that the likelihood of a flipped classification differs between initially correctly and incorrectly classified events. By testing how classifications change as a function of an attack parameter that scales the experimental uncertainties, the robustness of the network decision can be quantified. Furthermore, this allows testing the robustness of the classification of unlabeled experimental data.
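To make the optimization concrete, here is a minimal, self-contained sketch of such a constrained minimization, assuming the quadratic cost form given above and a toy logistic "network" in place of a real classifier; all names (`target_score`, `LAMBDA`, `s_star`) are hypothetical stand-ins, not the authors' implementation:

```python
# Minimal sketch of the MiniFool idea (assumed cost form, not the paper's code):
# perturb an input x under per-feature experimental uncertainties sigma, minimizing
# a chi^2 penalty on the perturbation plus the squared deviation of the target-class
# score from a desired value.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy "network": a fixed logistic score for the target class (stand-in for a real model).
w = rng.normal(size=8)

def target_score(x):
    return 1.0 / (1.0 + np.exp(-w @ x))

x0 = rng.normal(size=8)   # original event / input
sigma = np.full(8, 0.1)   # per-feature experimental uncertainties
s_star = 0.9              # desired target-class score (e.g. to flip the decision)
LAMBDA = 50.0             # weight balancing plausibility against attack success

def cost(delta):
    chi2 = np.sum((delta / sigma) ** 2)               # plausibility of the perturbation
    miss = (target_score(x0 + delta) - s_star) ** 2   # distance to the desired score
    return chi2 + LAMBDA * miss

res = minimize(cost, np.zeros_like(x0), method="L-BFGS-B")
delta = res.x
print(f"chi^2 of perturbation: {np.sum((delta / sigma) ** 2):.2f}")
print(f"score: {target_score(x0):.3f} -> {target_score(x0 + delta):.3f}")
```

Scaling the uncertainties `sigma` by an attack parameter, as described in the abstract, would then trace out how easily the classification flips as larger perturbations become statistically admissible.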