Enhancing Visual Interpretability and Explainability in Functional Survival Trees and Forests

📅 2025-04-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Functional survival trees (FST) and functional random survival forests (FRSF) achieve strong predictive performance in functional time-to-event analysis but suffer from poor interpretability, limiting their clinical adoption and decision-support utility. To address this, we propose the first visual interpretability framework specifically designed for functional survival models, integrating piecewise basis function expansion, path-wise importance attribution, and local surrogate visualization to jointly enhance tree-level structural readability of FST and forest-level decision traceability of FRSF. Extensive evaluation on simulated and real-world datasets demonstrates that our framework preserves high prediction accuracy while substantially improving human interpretability: FST yields concise, intuitive risk stratifications; FRSF explanations align closely with underlying risk mechanisms. This work establishes a new paradigm for functional survival analysis that simultaneously ensures reliability and transparency.
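The piecewise basis function expansion mentioned above converts each observed curve into a small vector of scalar coefficients, which a survival tree can then use as ordinary split variables. The sketch below illustrates that step only, using a polynomial basis and least-squares projection; the basis choice, function names, and toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def basis_expand(curves, t, degree=3):
    """Project each observed curve onto a polynomial basis.

    curves: (n_samples, n_timepoints) array of functional predictors
    t:      (n_timepoints,) common evaluation grid
    Returns an (n_samples, degree + 1) coefficient matrix whose columns
    can serve as scalar split variables in a survival tree.
    """
    # Design matrix: one column per basis function evaluated on the grid
    B = np.vander(t, N=degree + 1, increasing=True)       # (T, degree + 1)
    # Least-squares projection of every curve onto the basis
    coef, *_ = np.linalg.lstsq(B, curves.T, rcond=None)   # (degree + 1, n)
    return coef.T                                         # (n, degree + 1)

# Toy functional data: noisy quadratic curves on a shared grid
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
truth = np.array([[1.0, -2.0, 3.0],    # per-sample basis coefficients
                  [0.5,  1.5, -1.0]])
curves = truth @ np.vander(t, N=3, increasing=True).T
curves += rng.normal(scale=0.01, size=curves.shape)

coefs = basis_expand(curves, t, degree=2)
print(coefs.shape)  # (2, 3): scalar features ready for tree splitting
```

In practice one would substitute a B-spline or functional-data basis for the polynomial above, but the interpretability benefit is the same: each tree split references a named, plottable basis coefficient rather than an opaque functional distance.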

📝 Abstract
Functional survival models are key tools for analyzing time-to-event data with complex predictors, such as functional or high-dimensional inputs. Despite their predictive strength, these models often lack interpretability, which limits their value in practical decision-making and risk analysis. This study investigates two key survival models: the Functional Survival Tree (FST) and the Functional Random Survival Forest (FRSF). It introduces novel methods and tools to enhance the interpretability of FST models and improve the explainability of FRSF ensembles. Using both real and simulated datasets, the results demonstrate that the proposed approaches yield efficient, easy-to-understand decision trees that accurately capture the underlying decision-making processes of the model ensemble.
Problem

Research questions and friction points this paper is trying to address.

Enhancing interpretability of Functional Survival Tree models
Improving explainability of Functional Random Survival Forests
Simplifying decision trees for time-to-event data analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Piecewise basis function expansion for tree-level structural readability of FST
Path-wise importance attribution for forest-level decision traceability of FRSF
Local surrogate visualization yielding concise, accurate risk stratifications
Giuseppe Loffredo
Department of Mathematics and Physics, University of Campania Luigi Vanvitelli, Caserta, Italy
Elvira Romano
Department of Mathematics and Physics, University of Campania Luigi Vanvitelli, Caserta, Italy
Fabrizio Maturo
Full Professor in Statistics, Head of the Faculty of Technological and Innovation Sciences
Statistics, Biostatistics, Statistical Learning, Data Science, Business Statistics