Adversarial Robustness of Time-Series Classification for Crystal Collimator Alignment

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of temporal classification models to realistic adversarial perturbations in crystal collimator alignment tasks by proposing a preprocessing-aware robustness framework. A differentiable preprocessing wrapper that folds temporal normalization, padding constraints, and structured perturbations into the CNN pipeline, combined with sliding-window modeling, enables sequence-level adversarial robustness analysis. The framework is compatible with attack methods from both Foolbox and the Adversarial Robustness Toolbox (ART) and supports adversarial fine-tuning. Experiments show that adversarial fine-tuning improves robust accuracy by up to 18.6% without compromising clean-sample accuracy, and reveal adversarial misclassifications that persist across adjacent temporal windows.
📝 Abstract
In this paper, we analyze and improve the adversarial robustness of a convolutional neural network (CNN) that assists crystal-collimator alignment at CERN's Large Hadron Collider (LHC) by classifying a beam-loss monitor (BLM) time series during crystal rotation. We formalize a local robustness property for this classifier under an adversarial threat model based on real-world plausibility. Building on established parameterized input-transformation patterns used for transformation- and semantic-perturbation robustness, we instantiate a preprocessing-aware wrapper for our deployed time-series pipeline: we encode time-series normalization, padding constraints, and structured perturbations as a lightweight differentiable wrapper in front of the CNN, so that existing gradient-based robustness frameworks can operate on the deployed pipeline. For formal verification, data-dependent preprocessing such as per-window z-normalization introduces nonlinear operators that require verifier-specific abstractions. We therefore focus on attack-based robustness estimates and pipeline-checked validity by benchmarking robustness with the frameworks Foolbox and ART. Adversarial fine-tuning of the resulting CNN improves robust accuracy by up to 18.6 % without degrading clean accuracy. Finally, we extend robustness on time-series data beyond single windows to sequence-level robustness for sliding-window classification, introduce adversarial sequences as counterexamples to a temporal robustness requirement over full scans, and observe attack-induced misclassifications that persist across adjacent windows.
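The preprocessing-aware wrapper described in the abstract could look roughly like the following sketch: deployed preprocessing (per-window z-normalization and padding to a fixed CNN input size) is composed in front of the classifier, so a perturbation is applied in raw-signal space and flows through the same pipeline the model sees. This is an illustrative NumPy sketch, not the paper's implementation; `WINDOW_LEN`, the function names, and the `1e-8` stabilizer are assumptions.

```python
import numpy as np

WINDOW_LEN = 256  # hypothetical fixed input length of the CNN


def preprocess(window: np.ndarray) -> np.ndarray:
    """Per-window z-normalization followed by zero-padding to WINDOW_LEN.

    The normalization is data-dependent (mean/std of the input itself),
    which is why such preprocessing is nonlinear from a verifier's view.
    """
    mu, sigma = window.mean(), window.std()
    z = (window - mu) / (sigma + 1e-8)   # per-window z-normalization
    pad = max(0, WINDOW_LEN - z.shape[0])
    return np.pad(z, (0, pad))           # padding constraint: fixed CNN input size


def wrapped_model(window, delta, cnn):
    """Attack surface of a preprocessing-aware wrapper: perturb the *raw*
    BLM window, then run the deployed preprocessing and the CNN."""
    return cnn(preprocess(window + delta))
```

With a differentiable framework in place of NumPy, gradient-based attacks from Foolbox or ART can then backpropagate through `preprocess`, so crafted perturbations stay valid for the deployed pipeline rather than only for the CNN in isolation.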
Problem

Research questions and friction points this paper is trying to address.

adversarial robustness
time-series classification
crystal collimator alignment
beam-loss monitor
sliding-window classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial robustness
time-series classification
preprocessing-aware wrapper
sequence-level robustness
adversarial fine-tuning
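The sequence-level robustness idea (a full rotation scan is robust only if every sliding window keeps its clean label under the perturbation) can be sketched as follows. This is an illustrative toy, not the paper's code; window size, stride, and function names are assumptions.

```python
import numpy as np


def sliding_windows(scan: np.ndarray, win: int, stride: int) -> np.ndarray:
    """Extract overlapping windows from a full rotation scan."""
    return np.stack([scan[i:i + win] for i in range(0, len(scan) - win + 1, stride)])


def sequence_robust(scan, delta, classify, win=64, stride=32):
    """Sequence-level robustness over a full scan: the perturbed scan is
    robust iff every window keeps its clean label. Returns the verdict
    and the indices of flipped windows (a non-empty list is a
    counterexample to the temporal robustness requirement)."""
    clean = [classify(w) for w in sliding_windows(scan, win, stride)]
    adv = [classify(w) for w in sliding_windows(scan + delta, win, stride)]
    flipped = [i for i, (c, a) in enumerate(zip(clean, adv)) if c != a]
    return len(flipped) == 0, flipped
```

Because consecutive windows overlap, a single structured perturbation on the raw scan can flip several adjacent windows at once, which is the persistence effect the experiments report.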
Xaver Fink
CERN, Geneva, Switzerland
Borja Fernandez Adiego
CERN, Geneva, Switzerland
Daniele Mirarchi
CERN, Geneva, Switzerland
Eloise Matheson
CERN, Geneva, Switzerland
Alvaro Garcia Gonzales
CERN, Geneva, Switzerland
Gianmarco Ricci
DESY, Hamburg, Germany
Joost-Pieter Katoen
Distinguished Professor of Computer Science, RWTH Aachen University and University of Twente
formal methods, model checking, concurrency theory, probabilistic programming, program verification