🤖 AI Summary
Existing additive explanation methods struggle to capture higher-order feature interactions and time-varying effects in survival models. To address this limitation, this work proposes SurvFD, a framework that explicitly separates time-dependent from time-independent higher-order interactions. Building on functional decomposition theory and Shapley interaction values, the authors introduce SurvSHAP-IQ, a time-indexed method that enables fine-grained, time-aware interpretability of survival models. The study characterizes the failure mechanisms of additive explanations in survival analysis and establishes the first explanation framework capable of dynamically interpreting time-dependent interactions. The effectiveness and generalizability of SurvFD are validated across multiple time-to-event prediction tasks.
📝 Abstract
Hazard and survival functions are natural, interpretable targets in time-to-event prediction, but their inherent non-additivity fundamentally limits standard additive explanation methods. We introduce Survival Functional Decomposition (SurvFD), a principled approach for analyzing feature interactions in machine learning survival models. By decomposing higher-order effects into time-dependent and time-independent components, SurvFD offers a previously unrecognized perspective on survival explanations, explicitly characterizing when and why additive explanations fail. Building on this theoretical decomposition, we propose SurvSHAP-IQ, which extends Shapley interactions to time-indexed functions, providing a practical estimator for higher-order, time-dependent interactions. Together, SurvFD and SurvSHAP-IQ establish an interaction- and time-aware interpretability approach for survival modeling, with broad applicability across time-to-event prediction tasks.
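The two ideas in the abstract can be illustrated with a toy sketch. The snippet below is not the paper's SurvSHAP-IQ estimator: it is a brute-force pairwise Shapley interaction index (Grabisch-Roubens weighting) evaluated on a time-indexed value function. The hand-written survival model and the baseline-imputation value function are assumptions for illustration only. The sketch exhibits both phenomena the abstract names: because the survival function is multiplicative in the hazard terms, the pairwise interaction is nonzero even before any explicit interaction is active (non-additivity of survival functions), and the interaction value changes with the time index `t` (time-dependent interactions).

```python
import math
from itertools import combinations

def toy_survival_model(x, t):
    """Toy model (assumption, not from the paper): exponential survival whose
    hazard includes an x0*x1 interaction that switches on after t = 5."""
    hazard = 0.10 + 0.05 * x[0] + 0.05 * x[1] + (0.10 * x[0] * x[1] if t > 5 else 0.0)
    return math.exp(-hazard * t)

def value(model, x, baseline, subset, t):
    """Time-indexed value function v_t(S): predict the survival probability at
    time t with features outside S imputed from a baseline instance."""
    z = [x[k] if k in subset else baseline[k] for k in range(len(x))]
    return model(z, t)

def time_indexed_interaction(model, x, baseline, i, j, t):
    """Exact pairwise Shapley interaction index of features (i, j) at time t,
    computed by enumerating all subsets of the remaining features."""
    n = len(x)
    rest = [k for k in range(n) if k not in (i, j)]
    total = 0.0
    for r in range(len(rest) + 1):
        # Grabisch-Roubens weight for a coalition of size r (pairwise case)
        w = math.factorial(r) * math.factorial(n - r - 2) / math.factorial(n - 1)
        for T in combinations(rest, r):
            S = set(T)
            # discrete second difference of v_t with respect to features i and j
            delta = (value(model, x, baseline, S | {i, j}, t)
                     - value(model, x, baseline, S | {i}, t)
                     - value(model, x, baseline, S | {j}, t)
                     + value(model, x, baseline, S, t))
            total += w * delta
    return total

x, baseline = [1.0, 1.0, 0.0], [0.0, 0.0, 0.0]
early = time_indexed_interaction(toy_survival_model, x, baseline, 0, 1, t=2.0)
late = time_indexed_interaction(toy_survival_model, x, baseline, 0, 1, t=10.0)
print(early, late)  # small positive before the switch; larger-magnitude negative after
```

Exact enumeration costs 2^(n-2) model evaluations per feature pair and per time point, which is only feasible for a handful of features; a practical estimator for this quantity is precisely what the paper's SurvSHAP-IQ is proposed to provide.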