ADHAM: Additive Deep Hazard Analysis Mixtures for Interpretable Survival Regression

📅 2025-09-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Survival analysis in clinical decision-making demands both high predictive accuracy and model interpretability—yet most deep learning approaches sacrifice the latter. To address this, we propose Additive Deep Hazard Analysis Mixtures (ADHAM), which enable three-level interpretable survival modeling—population-, subgroup-, and individual-level—via a conditional latent subgroup structure and covariate-specific additive hazard functions. A neural network learns latent subgroup assignments, and equivalent subgroups are merged in a post-training refinement step to improve interpretability and robustness. Evaluated on multiple real-world medical datasets, ADHAM achieves predictive performance on par with state-of-the-art baselines (measured by C-index and Integrated Brier Score) while uncovering heterogeneous effects of exposures across subgroups. This yields clinically actionable, interpretable risk attribution and supports stratified intervention strategies.
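The summary evaluates predictive performance with the concordance index (C-index). As a reference point, here is a minimal sketch of Harrell's C-index for right-censored data; this is the standard metric, not code from the paper, and the O(n²) pairwise loop is for clarity rather than efficiency:

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: the fraction of comparable pairs whose predicted
    risk ordering agrees with the observed event ordering. A pair (i, j) is
    comparable when subject i has the earlier time and an observed event."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    risk = np.asarray(risk_scores, dtype=float)
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if events[i] != 1:          # censored subjects cannot anchor a pair
            continue
        for j in range(n):
            if times[i] < times[j]:  # i fails first while j is still at risk
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5  # ties in risk count as half
    return concordant / comparable if comparable else float("nan")
```

A perfectly anti-ranked model scores 0, a random one about 0.5, and a perfectly ranked one 1.0.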

📝 Abstract
Survival analysis is a fundamental tool for modeling time-to-event outcomes in healthcare. Recent advances have introduced flexible neural network approaches for improved predictive performance. However, most of these models do not provide interpretable insights into the association between exposures and the modeled outcomes, a critical requirement for decision-making in clinical practice. To address this limitation, we propose Additive Deep Hazard Analysis Mixtures (ADHAM), an interpretable additive survival model. ADHAM assumes a conditional latent structure that defines subgroups, each characterized by a combination of covariate-specific hazard functions. To select the number of subgroups, we introduce a post-training refinement that reduces the number of equivalent latent subgroups by merging similar groups. We perform comprehensive studies to demonstrate ADHAM's interpretability at the population, subgroup, and individual levels. Extensive experiments on real-world datasets show that ADHAM provides novel insights into the association between exposures and outcomes. Further, ADHAM remains on par with existing state-of-the-art survival baselines in terms of predictive performance, offering a scalable and interpretable approach to time-to-event prediction in healthcare.
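The abstract describes the core structure: latent subgroups, each characterized by a combination of covariate-specific hazard functions. The toy sketch below illustrates that shape with numpy; the linear gating network, softplus link, and time-constant per-covariate effects are illustrative assumptions, not the paper's parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def softplus(z):
    # numerically stable log(1 + exp(z)), keeps hazards positive
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

K, D = 3, 4                         # K latent subgroups, D covariates
W_gate = rng.normal(size=(D, K))    # stand-in for the subgroup-assignment network
beta = rng.normal(size=(K, D))      # covariate-specific additive effects per subgroup
base = rng.normal(size=K)           # per-subgroup baseline log-hazard

def mixture_hazard(x):
    """Hazard as a mixture over latent subgroups:
    lambda(x) = sum_k pi_k(x) * softplus(base_k + sum_d beta[k, d] * x[d]).
    The additive per-covariate terms inside each subgroup are what make
    covariate-level risk attribution readable."""
    pi = softmax(x @ W_gate)                   # subgroup membership probabilities
    group_hazards = softplus(base + beta @ x)  # one hazard per subgroup
    return float(pi @ group_hazards), pi

x = rng.normal(size=D)
lam, pi = mixture_hazard(x)
```

Inspecting `pi` gives the subgroup-level view, while the `beta[k, d] * x[d]` terms give the individual covariate attributions within a subgroup.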
Problem

Research questions and friction points this paper is trying to address.

Develops interpretable survival model for healthcare outcomes
Addresses lack of interpretability in neural network survival models
Identifies covariate-specific hazard patterns through subgroup analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interpretable additive survival model with latent subgroups
Post-training refinement merges similar latent subgroups
Maintains predictive performance with scalable interpretable approach
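The post-training refinement that merges similar latent subgroups could be sketched as a greedy merge over the subgroups' covariate-effect vectors; the Euclidean-distance criterion and `tol` threshold here are illustrative assumptions, not the paper's actual merging rule:

```python
import numpy as np

def merge_similar_subgroups(betas, tol=0.5):
    """Greedy post-hoc merge: collapse subgroups whose covariate-effect
    vectors lie within `tol` (Euclidean distance) of an existing
    representative. Returns the representatives and, for each input
    subgroup, the index of the merged group it was assigned to."""
    betas = np.asarray(betas, dtype=float)
    reps, assignment = [], []
    for b in betas:
        for r_idx, r in enumerate(reps):
            if np.linalg.norm(b - r) <= tol:   # close enough: reuse this group
                assignment.append(r_idx)
                break
        else:                                   # no match: start a new group
            assignment.append(len(reps))
            reps.append(b)
    return np.array(reps), assignment
```

Merging equivalent subgroups after training keeps the number of reported subgroups honest: the model can be fit with a generous K, and redundant components are collapsed rather than interpreted as distinct clinical strata.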
Mert Ketenci
Department of Computer Science, Columbia University, New York, NY, USA
Vincent Jeanselme
Department of Biomedical Informatics, Columbia University, New York, NY, USA
Harry Reyes Nieva
Department of Biomedical Informatics, Columbia University, New York, NY, USA
Shalmali Joshi
Columbia University
Artificial Intelligence, Machine Learning, Biomedical Sciences, Clinical Informatics
Noémie Elhadad
Associate Professor and Chair of Biomedical Informatics, Columbia University
machine learning for healthcare, health informatics, natural language processing, biomedical informatics, women's health