A Self-explainable Model of Long Time Series by Extracting Informative Structured Causal Patterns

📅 2025-12-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing interpretable AI methods for long time series yield only discrete importance scores, failing to capture intrinsic temporal structures—such as trends, periodicities, and state transitions—thereby undermining model trustworthiness. To address this, we propose EXCAP, the first framework integrating attention-driven sequence segmentation, a pretrained causal graph–guided decoder, latent-variable aggregation, and causal masking–based robust optimization, enabling continuous, smooth, and causally consistent temporal pattern extraction. EXCAP satisfies four key desiderata: temporal continuity, pattern centrality, causal disentanglement, and inference fidelity—supporting human-readable modeling of trends, cycles, and mechanistic shifts. Experiments demonstrate that EXCAP achieves state-of-the-art performance in both classification and forecasting tasks, while generating explanations that are both temporally coherent and causally grounded. Its robust interpretability shows strong promise for high-stakes domains including healthcare and finance.

Technology Category

Application Category

📝 Abstract
Explainability is essential for neural networks that model long time series, yet most existing explainable AI methods only produce point-wise importance scores and fail to capture temporal structures such as trends, cycles, and regime changes. This limitation weakens human interpretability and trust in long-horizon models. To address these issues, we identify four key requirements for interpretable time-series modeling: temporal continuity, pattern-centric explanation, causal disentanglement, and faithfulness to the model's inference process. We propose EXCAP, a unified framework that satisfies all four requirements. EXCAP combines an attention-based segmenter that extracts coherent temporal patterns, a causally structured decoder guided by a pre-trained causal graph, and a latent aggregation mechanism that enforces representation stability. Our theoretical analysis shows that EXCAP provides smooth and stable explanations over time and is robust to perturbations in causal masks. Extensive experiments on classification and forecasting benchmarks demonstrate that EXCAP achieves strong predictive accuracy while generating coherent and causally grounded explanations. These results show that EXCAP offers a principled and scalable approach to interpretable modeling of long time series with relevance to high-stakes domains such as healthcare and finance.
Problem

Research questions and friction points this paper is trying to address.

Extracts structured causal patterns from long time series
Addresses limitations of point-wise importance scores in explainability
Ensures temporal continuity and faithfulness in model explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention-based segmenter extracts coherent temporal patterns
Causally structured decoder uses pre-trained causal graph
Latent aggregation mechanism enforces representation stability
🔎 Similar Papers
No similar papers found.