🤖 AI Summary
This work addresses the conflation of causation and correlation in time-series classification interpretability, proposing the first concept-level, segment-oriented causal attribution framework. Methodologically, it introduces a high-fidelity counterfactual generator based on diffusion models, enabling concept-level interventions on predefined temporal segments and rigorous estimation of their causal effects on classification outcomes. Unlike conventional correlation-based attribution methods, this framework explicitly models time-series segments from a causal inference perspective, distinguishing causal pathways from spurious statistical associations. Experiments across multiple time-series classification tasks demonstrate that the proposed causal attribution reliably identifies decision-relevant segments, whereas mainstream correlation-based approaches frequently overlook true causal mechanisms, leading to systematic misattribution. The framework thus advances interpretable time-series classification by grounding explanations in causal reasoning rather than mere association.
📝 Abstract
Despite the excellent performance of machine learning models, understanding their decisions remains a long-standing goal. While commonly used attribution methods in explainable AI attempt to address this issue, they typically rely on associational rather than causal relationships. In this study, within the context of time series classification, we introduce a novel framework to assess the causal effect of concepts, i.e., predefined segments within a time series, on specific classification outcomes. To achieve this, we leverage state-of-the-art diffusion-based generative models to estimate counterfactual outcomes. Our approach compares these causal attributions with closely related associational attributions, both theoretically and empirically. We demonstrate the insights gained by our approach on a diverse set of qualitatively different time series classification tasks. Although causal and associational attributions may often share similarities, in all cases they differ in important details, underscoring the risks of drawing causal conclusions from associational data alone. We believe the proposed approach is also widely applicable in other domains, particularly where predefined segmentations are available, to shed light on the limits of associational attributions.
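The core idea, intervening on a predefined segment via a counterfactual generator and measuring the change in the classifier's output, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `causal_effect`, the toy classifier, and the toy generator are all hypothetical stand-ins (the paper uses a diffusion model as the counterfactual generator).

```python
import numpy as np

def causal_effect(x, segment, classifier, generate_cf, n_samples=8, target=0):
    # Causal effect of a concept (segment) on the probability of `target`:
    # factual prediction minus the mean prediction over counterfactuals in
    # which only `segment` has been intervened on.
    p_factual = classifier(x)[target]
    cf_probs = [classifier(generate_cf(x, segment))[target]
                for _ in range(n_samples)]
    return p_factual - float(np.mean(cf_probs))

# --- toy stand-ins (NOT the paper's models) ---
rng = np.random.default_rng(0)

def toy_classifier(x):
    # Probability of class 0 driven only by the mean of the first half.
    p = 1.0 / (1.0 + np.exp(-x[: len(x) // 2].mean()))
    return np.array([p, 1.0 - p])

def toy_generate_cf(x, segment):
    # Resample the chosen segment from a reference (noise) distribution;
    # a diffusion model would instead inpaint it conditioned on the rest.
    x_cf = x.copy()
    x_cf[segment] = rng.normal(0.0, 1.0, size=segment.stop - segment.start)
    return x_cf

x = np.concatenate([np.full(50, 2.0), np.zeros(50)])  # informative first half
effect_causal = causal_effect(x, slice(0, 50), toy_classifier, toy_generate_cf)
effect_spurious = causal_effect(x, slice(50, 100), toy_classifier, toy_generate_cf)
print(effect_causal, effect_spurious)
```

In this toy setup, intervening on the first (decision-relevant) segment shifts the prediction, while intervening on the second segment leaves it unchanged, which is the kind of distinction an associational attribution need not respect.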