AI Summary
This study addresses the limited interpretability of fault detection in autonomous spacecraft attitude and orbit control systems by proposing an explainable fault-detection framework based on a convolutional autoencoder. The method extracts low-dimensional, semantically annotated "peephole" codes from intermediate network activations, enabling interpretable representation and precise localization of anomalies in reaction-wheel telemetry, as well as bias detection, with negligible additional computational overhead. Experimental results demonstrate that the generated anomaly indicators effectively support onboard fault detection, isolation, and recovery, confirming the feasibility and practicality of the approach in resource-constrained space environments.
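As a rough illustration of the idea, the sketch below shows one way such peephole codes could be derived: a 1-D convolutional autoencoder over telemetry windows, with a forward hook that captures an intermediate activation and projects it to a low-dimensional code. All layer sizes and names (`ConvAutoencoder`, `peephole_proj`, `peephole_dim`) are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical sketch: 1-D convolutional autoencoder with a "peephole" code
# derived from an intermediate activation via a forward hook.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, n_channels: int = 4, peephole_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(16, n_channels, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )
        # Low-dimensional projection of the pooled intermediate activation.
        self.peephole_proj = nn.Linear(32, peephole_dim)
        self._activation = None
        # Hook an intermediate encoder layer; any layer could be "peeped" at.
        self.encoder[2].register_forward_hook(self._capture)

    def _capture(self, module, inputs, output):
        self._activation = output  # shape: (batch, 32, time)

    def forward(self, x):
        z = self.encoder(x)               # triggers the hook
        recon = self.decoder(z)
        pooled = self._activation.mean(dim=-1)   # (batch, 32)
        peephole = self.peephole_proj(pooled)    # (batch, peephole_dim)
        return recon, peephole
```

The projection adds only one small linear layer on top of activations the network computes anyway, which is consistent with the negligible-overhead claim above.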
Abstract
The increasing autonomy of spacecraft demands fault-detection systems that are both reliable and explainable. This work addresses eXplainable Artificial Intelligence for onboard Fault Detection, Isolation and Recovery within the Attitude and Orbit Control Subsystem by introducing a framework that enhances the interpretability of neural anomaly detectors. We propose a method to derive low-dimensional, semantically annotated encodings, called peepholes, from intermediate neural activations. Applied to a convolutional autoencoder, the framework produces interpretable indicators that enable the identification and localization of anomalies in reaction-wheel telemetry. Peephole analysis further enables bias detection and supports fault localization. The proposed framework enables the semantic characterization of detected anomalies while requiring only a marginal increase in computational resources, thus supporting its feasibility for onboard deployment.
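To make the detection-and-localization step concrete, here is a hedged usage sketch that reuses the hypothetical `ConvAutoencoder` above: the global reconstruction error flags an anomalous window, per-channel residuals localize it to a telemetry channel, and the distance of the peephole code from nominal statistics (assumed precomputed on fault-free telemetry) serves as a cheap semantic indicator. The threshold and statistics are placeholders, not values from the paper.

```python
# Hedged usage sketch with placeholder data and thresholds.
import torch

model = ConvAutoencoder(n_channels=4, peephole_dim=8)
model.eval()

window = torch.randn(1, 4, 256)           # one telemetry window: (batch, channels, time)
with torch.no_grad():
    recon, peephole = model(window)

residual = (window - recon) ** 2
score = residual.mean().item()            # global anomaly score
per_channel = residual.mean(dim=-1)       # (1, 4): points to the faulty channel

# Deviation of the peephole code from nominal statistics (placeholders here;
# in practice estimated on fault-free telemetry).
nominal_mean = torch.zeros(8)
nominal_std = torch.ones(8)
z_scores = (peephole.squeeze(0) - nominal_mean) / nominal_std

is_anomalous = score > 0.05               # threshold tuned on validation data
faulty_channel = per_channel.argmax(dim=-1).item()
```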