🤖 AI Summary
Existing interpretable AI methods for long time series yield only discrete importance scores and fail to capture intrinsic temporal structures such as trends, periodicities, and state transitions, which undermines model trustworthiness. To address this, we propose EXCAP, the first framework to integrate attention-driven sequence segmentation, a decoder guided by a pre-trained causal graph, latent-variable aggregation, and robust optimization based on causal masking, enabling the extraction of continuous, smooth, and causally consistent temporal patterns. EXCAP satisfies four key desiderata: temporal continuity, pattern centrality, causal disentanglement, and inference fidelity, supporting human-readable modeling of trends, cycles, and mechanistic shifts. Experiments demonstrate that EXCAP achieves state-of-the-art performance on both classification and forecasting tasks while generating explanations that are temporally coherent and causally grounded. Its robust interpretability makes it well suited to high-stakes domains such as healthcare and finance.
📝 Abstract
Explainability is essential for neural networks that model long time series, yet most existing explainable AI methods produce only point-wise importance scores and fail to capture temporal structures such as trends, cycles, and regime changes. This limitation weakens human interpretability of, and trust in, long-horizon models. To address it, we identify four key requirements for interpretable time-series modeling: temporal continuity, pattern-centric explanation, causal disentanglement, and faithfulness to the model's inference process. We propose EXCAP, a unified framework that satisfies all four. EXCAP combines an attention-based segmenter that extracts coherent temporal patterns, a causally structured decoder guided by a pre-trained causal graph, and a latent aggregation mechanism that enforces representation stability. Our theoretical analysis shows that EXCAP yields smooth, stable explanations over time and is robust to perturbations of its causal masks. Extensive experiments on classification and forecasting benchmarks demonstrate that EXCAP achieves strong predictive accuracy while generating coherent, causally grounded explanations. These results establish EXCAP as a principled and scalable approach to interpretable modeling of long time series, with direct relevance to high-stakes domains such as healthcare and finance.
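The abstract does not include implementation details, so the following is only a loose toy illustration of two of the ideas it names: grouping contiguous high-attention timesteps into segments (a stand-in for attention-driven segmentation) and zeroing attention along edges absent from a causal graph (a stand-in for causal masking). All function names, the self-attention proxy, and the thresholding heuristic are our own assumptions, not EXCAP's actual method.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_segments(series, threshold=0.5):
    """Toy attention-driven segmenter (hypothetical, not EXCAP's):
    score each timestep by how much attention it receives under a
    self-attention proxy, then group contiguous high-score timesteps
    into (start, end) segments."""
    n = len(series)
    # Use the raw series as both query and key (toy choice).
    attn = softmax(np.outer(series, series) / np.sqrt(n))
    scores = attn.sum(axis=0)                      # attention received per timestep
    scores = (scores - scores.min()) / (np.ptp(scores) + 1e-9)
    mask = scores > threshold
    segments, start = [], None
    for t, on in enumerate(mask):
        if on and start is None:
            start = t
        elif not on and start is not None:
            segments.append((start, t))
            start = None
    if start is not None:
        segments.append((start, n))
    return segments

def apply_causal_mask(attn, causal_graph):
    """Toy causal masking: zero attention between variables with no
    edge in the (assumed, pre-trained) causal adjacency matrix,
    then renormalize each row."""
    masked = attn * causal_graph
    return masked / (masked.sum(axis=-1, keepdims=True) + 1e-9)
```

For example, a flat series with a single burst yields one segment covering the burst, and masking with a lower-triangular graph leaves each decoder row a valid distribution over its causal parents only.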