Formal Verification of Local Robustness of a Classification Algorithm for a Spatial Use Case

📅 2025-09-04
🤖 AI Summary
Satellite AI-based fault detection systems demand very high reliability, which calls for rigorous verification of neural network robustness against the input uncertainties encountered in space environments. Method: the paper proposes a formal framework for quantifying local robustness, applying the Marabou verifier to aerospace fault detection models. It constructs precise input constraints, modeling sensor noise and communication distortions, together with formal output specifications, to enable end-to-end, mathematically provable guarantees on classifier stability under bounded perturbations. Contribution/Results: the approach delivers certified robustness against realistic on-orbit anomalies and is presented as one of the first formal verification efforts for spaceborne AI, establishing a reproducible, verifiable methodology for trustworthy onboard intelligent diagnostics.

📝 Abstract
Failures in satellite components are costly and challenging to address, often requiring significant human and material resources. Embedding a hybrid AI-based system for fault detection directly in the satellite can greatly reduce this burden by allowing earlier detection. However, such systems must operate with extremely high reliability. To ensure this level of dependability, we employ the formal verification tool Marabou to verify the local robustness of the neural network models used in the AI-based algorithm. This tool allows us to quantify how much a model's input can be perturbed before its output behavior becomes unstable, thereby improving trustworthiness with respect to its performance under uncertainty.
Problem

Research questions and friction points this paper is trying to address.

Verifying local robustness of satellite fault detection neural networks
Ensuring high reliability for AI-based systems in space
Quantifying input perturbation tolerance before output instability
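The local robustness property behind these questions can be stated in the standard formulation (this is the textbook definition, not a formula quoted from the paper): the classifier's decision must be invariant over an L-infinity ball around the nominal input.

```latex
% Local robustness of a classifier f at input x_0 with radius epsilon:
% every input within distance epsilon of x_0 (in the L-infinity norm)
% receives the same predicted class as x_0 itself.
\forall x.\; \lVert x - x_0 \rVert_\infty \le \varepsilon
  \;\Longrightarrow\;
  \operatorname*{arg\,max}_i f_i(x) \;=\; \operatorname*{arg\,max}_i f_i(x_0)
```

A verifier such as Marabou checks the negation of this property: it searches for a counterexample input inside the ball whose predicted class differs; an UNSAT answer constitutes a proof of robustness at radius epsilon.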
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formal verification tool Marabou for neural networks
Quantifying input perturbation before output instability
Verifying local robustness in AI classification algorithms
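Neither the paper's models nor its Marabou encoding are reproduced here. As a self-contained illustration of the property being verified, the following pure-Python sketch certifies local robustness of a toy ReLU classifier by interval bound propagation, a sound but incomplete over-approximation (unlike Marabou, which is complete for piecewise-linear networks). All weights and function names below are hypothetical.

```python
def affine_bounds(W, b, lo, hi):
    """Propagate interval bounds [lo, hi] through y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l, h = bias, bias
        for w, xl, xh in zip(row, lo, hi):
            if w >= 0:
                l += w * xl
                h += w * xh
            else:  # negative weight flips which endpoint is extremal
                l += w * xh
                h += w * xl
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

def forward(layers, x):
    """Plain forward pass; ReLU on all layers except the last."""
    y = list(x)
    for i, (W, b) in enumerate(layers):
        y = [sum(w * v for w, v in zip(row, y)) + bi
             for row, bi in zip(W, b)]
        if i < len(layers) - 1:
            y = [max(0.0, v) for v in y]
    return y

def certify(layers, x, eps):
    """True if the predicted class at x provably cannot change for any
    perturbation with L-infinity norm at most eps (sound, incomplete)."""
    logits = forward(layers, x)
    c = max(range(len(logits)), key=logits.__getitem__)
    lo = [v - eps for v in x]
    hi = [v + eps for v in x]
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(W, b, lo, hi)
        if i < len(layers) - 1:
            lo, hi = relu_bounds(lo, hi)
    # Robust iff the predicted logit's lower bound beats every other
    # logit's upper bound.
    return all(lo[c] > hi[j] for j in range(len(lo)) if j != c)

if __name__ == "__main__":
    # Hypothetical 2-class linear toy model (illustrative weights only).
    toy = [([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])]
    print(certify(toy, [1.0, 0.0], 0.4))  # robust at eps = 0.4
```

A binary search over eps on top of `certify` yields the kind of quantified perturbation tolerance the paper obtains with Marabou, which additionally returns a concrete counterexample input when the property fails.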
Delphine Longuet
Thales, TRT, cortAIx Labs, France
Amira Elouazzani
Thales, TRT, cortAIx Labs, France
Alejandro Penacho Riveiros
KTH Royal Institute of Technology, Stockholm, Sweden
Nicola Bastianello
KTH Royal Institute of Technology, Stockholm, Sweden