🤖 AI Summary
Satellite AI-based fault detection systems demand ultra-high reliability, necessitating rigorous verification of neural network robustness against input uncertainties encountered in space environments. Method: This paper proposes a formal quantification framework for local robustness verification, pioneering the application of the Marabou verifier to aerospace fault detection models. It constructs precise input constraints—modeling sensor noise and communication distortions—and formal output specifications to enable end-to-end, mathematically provable guarantees on classifier stability within bounded perturbations. Contribution/Results: The approach delivers certified robustness against realistic on-orbit anomalies, significantly enhancing fault detector resilience. It represents the first formal verification effort for spaceborne AI systems, establishing a reproducible, verifiable methodology for trustworthy onboard intelligent diagnostics—thereby bridging a critical gap in the formal assurance of satellite AI.
📝 Abstract
Failures in satellite components are costly and challenging to address, often requiring significant human and material resources. Embedding a hybrid AI-based fault-detection system directly on board the satellite can greatly reduce this burden by enabling earlier detection. However, such a system must operate with extremely high reliability. To ensure this level of dependability, we employ the formal verification tool Marabou to verify the local robustness of the neural network models used in the AI-based algorithm. The tool lets us quantify how much a model's input can be perturbed before its output behavior becomes unstable, thereby strengthening trust in its performance under uncertainty.
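The property being verified here is local robustness: given an input x (e.g. a sensor reading) classified with some label, does every perturbed input within an ℓ∞ ball of radius ε receive the same label? Marabou answers this question exactly via SMT-based reasoning; the sketch below instead uses interval bound propagation (IBP), a much simpler sound-but-incomplete stand-in, purely to illustrate the property and the binary search for a certified radius. The network weights, function names, and the toy classifier are all hypothetical and not from the paper.

```python
# Hypothetical sketch: certifying local robustness of a tiny ReLU
# classifier with interval bound propagation (IBP). Marabou performs a
# complete SMT-based analysis; IBP is a simpler sound-but-incomplete
# substitute used here only to illustrate the robustness property.

def affine_bounds(W, b, lo, hi):
    """Propagate interval bounds [lo, hi] through y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # Lower bound: pair positive weights with input lows, negative with highs.
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

def certified_robust(W1, b1, W2, b2, x, label, eps):
    """True if every input within an l_inf ball of radius eps keeps `label` on top."""
    lo = [v - eps for v in x]
    hi = [v + eps for v in x]
    lo, hi = affine_bounds(W1, b1, lo, hi)
    lo, hi = relu_bounds(lo, hi)
    lo, hi = affine_bounds(W2, b2, lo, hi)
    # Robust if the label's worst-case score beats every rival's best case.
    return all(lo[label] > hi[k] for k in range(len(lo)) if k != label)

def certified_radius(W1, b1, W2, b2, x, label, hi_eps=1.0, iters=30):
    """Binary-search the largest perturbation radius that is still certified."""
    lo_eps = 0.0
    for _ in range(iters):
        mid = (lo_eps + hi_eps) / 2
        if certified_robust(W1, b1, W2, b2, x, label, mid):
            lo_eps = mid
        else:
            hi_eps = mid
    return lo_eps

# Hypothetical toy classifier: 2 inputs -> ReLU(2 units) -> 2 class scores.
W1 = [[1.0, 0.0], [0.0, 1.0]]; b1 = [0.0, 0.0]
W2 = [[1.0, -1.0], [-1.0, 1.0]]; b2 = [0.0, 0.0]
```

For this toy network and the input `[1.0, 0.0]` (classified as label 0), the search converges to a certified radius of 0.5: any perturbation smaller than that provably cannot flip the prediction, which is exactly the kind of quantitative guarantee the paper obtains with Marabou on the real fault-detection models.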