🤖 AI Summary
Functional survival trees (FST) and functional random survival forests (FRSF) achieve strong predictive performance in functional time-to-event analysis, but their poor interpretability limits clinical adoption and decision-support utility. To address this, we propose the first visual interpretability framework designed specifically for functional survival models. The framework integrates piecewise basis function expansion, path-wise importance attribution, and local surrogate visualization to jointly enhance the tree-level structural readability of FST and the forest-level decision traceability of FRSF. Extensive evaluation on simulated and real-world datasets demonstrates that the framework preserves high prediction accuracy while substantially improving human interpretability: FST yields concise, intuitive risk stratifications, and FRSF explanations align closely with the underlying risk mechanisms. This work establishes a new paradigm for functional survival analysis that ensures both reliability and transparency.
📝 Abstract
Functional survival models are key tools for analyzing time-to-event data with complex predictors, such as functional or high-dimensional inputs. Despite their predictive strength, these models often lack interpretability, which limits their value in practical decision-making and risk analysis. This study investigates two such models, the Functional Survival Tree (FST) and the Functional Random Survival Forest (FRSF), and introduces novel methods and tools to enhance the interpretability of FST models and improve the explainability of FRSF ensembles. On both real and simulated datasets, the proposed approaches yield efficient, easy-to-understand decision trees that accurately capture the underlying decision-making processes of the model ensemble.
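To make the "piecewise basis function expansion" step named in the summary concrete, here is a minimal, hypothetical sketch of the general idea: each functional predictor, observed on a grid, is projected onto a small piecewise-linear ("hat") basis, and the resulting coefficients serve as scalar features for a downstream survival tree or forest. The basis choice, knot count, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def hat_basis(grid, n_knots):
    """Evaluate a piecewise-linear ("hat") basis on an observation grid.

    Returns a (len(grid), n_knots) design matrix whose columns are
    triangular basis functions centered at equally spaced knots.
    """
    knots = np.linspace(grid.min(), grid.max(), n_knots)
    h = knots[1] - knots[0]  # knot spacing
    return np.maximum(0.0, 1.0 - np.abs(grid[:, None] - knots[None, :]) / h)

def expand_curves(X, grid, n_knots=8):
    """Project each observed curve (row of X) onto the hat basis.

    The least-squares coefficients summarize each functional predictor
    with n_knots scalar features, which could then be fed to an ordinary
    survival tree or random survival forest.
    """
    B = hat_basis(grid, n_knots)                    # (n_grid, n_knots)
    coef, *_ = np.linalg.lstsq(B, X.T, rcond=None)  # (n_knots, n_samples)
    return coef.T                                   # (n_samples, n_knots)

# Toy functional data: 5 noiseless sine curves with random phases.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 100)
phases = rng.uniform(0, 2 * np.pi, size=5)
X = np.sin(2 * np.pi * grid[None, :] + phases[:, None])

features = expand_curves(X, grid, n_knots=8)
# Reconstruct the curves from their coefficients to check fidelity.
recon = features @ hat_basis(grid, 8).T
```

The design point this illustrates is why tree interpretability can follow from the expansion: each split in the fitted tree then refers to a single basis coefficient, i.e. to the curve's level over a localized piece of its domain, rather than to an opaque functional distance.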