🤖 AI Summary
Existing interpretability methods (gradient-based, masking-based, and permutation-based) exhibit instability and unreliability in dynamic time-series forecasting tasks (e.g., ICU monitoring) because they fail to account for time-varying dependencies and temporal smoothness.

Method: We propose an explainability framework based on learnable time-varying masks. It optimizes feature-importance masks end to end while enforcing explicit temporal continuity constraints and label-consistency regularization, so that feature relevance can evolve along dynamic patient trajectories.

Results: Experiments across multiple deep time-series models and real-world ICU datasets show that conventional methods suffer from severe temporal inconsistency and noise sensitivity. In contrast, our framework yields explanations that are both temporally stable and clinically interpretable, improving model trustworthiness and practical utility in high-risk decision making.
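To make the idea concrete, below is a minimal PyTorch sketch of a learnable time-varying mask with the two regularizers the summary describes. It is an illustration under our own assumptions, not the authors' implementation: the class `TimeVaryingMask`, the function `mask_losses`, and the weights `lambda_tv` and `lambda_sparse` are hypothetical names, and the paper's exact loss terms and weighting may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeVaryingMask(nn.Module):
    """Hypothetical sketch: one learnable importance value per (timestep, feature)."""

    def __init__(self, seq_len: int, n_features: int):
        super().__init__()
        # Unconstrained logits; a sigmoid keeps mask values in (0, 1).
        # Initializing at 0 starts every mask entry at 0.5.
        self.logits = nn.Parameter(torch.zeros(seq_len, n_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features); the mask broadcasts over the batch.
        return x * torch.sigmoid(self.logits)

def mask_losses(logits: torch.Tensor,
                model: nn.Module,
                x: torch.Tensor,
                baseline_preds: torch.Tensor,
                lambda_tv: float = 0.1,
                lambda_sparse: float = 0.01) -> torch.Tensor:
    """Label consistency + temporal continuity + sparsity (weights are assumptions)."""
    mask = torch.sigmoid(logits)
    # Label consistency: predictions on the masked input should match the
    # model's predictions on the original trajectory.
    consistency = F.mse_loss(model(x * mask), baseline_preds)
    # Temporal continuity: total-variation penalty on adjacent timesteps,
    # discouraging importance scores that flicker from step to step.
    continuity = (mask[1:] - mask[:-1]).abs().mean()
    # Sparsity: push most entries toward 0 so only salient inputs stay unmasked.
    sparsity = mask.mean()
    return consistency + lambda_tv * continuity + lambda_sparse * sparsity
```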
📝 Abstract
Interpretability plays a vital role in aligning and deploying deep learning models in critical care, especially under constantly evolving conditions that influence patient survival. However, common interpretability algorithms face unique challenges when applied to dynamic prediction tasks, where patient trajectories evolve over time. Gradient-, Occlusion-, and Permutation-based methods often struggle with time-varying target dependency and temporal smoothness. This work systematically analyzes these failure modes and advocates learnable mask-based interpretability frameworks as alternatives, since they can incorporate temporal continuity and label-consistency constraints to learn feature importance over time. Here, we propose that learnable mask-based approaches for dynamic time-series prediction problems provide more reliable and consistent interpretations for applications in critical care and similar domains.
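As a rough usage example continuing the sketch above (again with hypothetical names; a trained forecasting model `model` and an input batch `x` are assumed to exist), the mask is fit by gradient descent against a frozen model:

```python
# Freeze the trained model; only the mask parameters are optimized.
model.eval()
for p in model.parameters():
    p.requires_grad_(False)

mask_module = TimeVaryingMask(seq_len=48, n_features=20)  # e.g., 48 hourly steps
optimizer = torch.optim.Adam(mask_module.parameters(), lr=1e-2)

with torch.no_grad():
    baseline_preds = model(x)  # predictions on the unmasked trajectory

for _ in range(500):  # iteration count is an arbitrary choice for this sketch
    optimizer.zero_grad()
    loss = mask_losses(mask_module.logits, model, x, baseline_preds)
    loss.backward()
    optimizer.step()

# (seq_len, n_features) importance map, smoothed over time by the continuity penalty.
importance = torch.sigmoid(mask_module.logits).detach()
```

The design choice worth noting is that the mask is optimized against the model's own predictions rather than the ground-truth labels, so the resulting importance map explains what the model relies on, not what is clinically true.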