🤖 AI Summary
This work addresses the challenge of applying general-purpose machine learning tools to censored survival data. The authors discuss reduction techniques that transform a survival task into a standard regression or classification task while explicitly retaining the censoring information, for example via hazard-based transformations, time discretization (temporal binning), and censoring-aware sample weighting. Such reductions make mainstream supervised learners applicable to survival tasks without custom model code. The main contributions are threefold: (i) an overview of reduction techniques together with a discussion of their respective strengths and weaknesses; (ii) a principled, open-source implementation that makes several of these reductions directly usable within standard machine learning workflows; and (iii) a benchmark analysis comparing the reductions' predictive performance to established machine learning methods for survival analysis, illustrated with dedicated examples.
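To make the time-discretization idea concrete, here is a minimal pure-Python sketch of the classical discrete-time (person-period) reduction: each subject's follow-up is split into intervals, and every interval the subject enters becomes one binary classification row ("did the event occur in this interval?"). The function name and the convention of dropping a partially observed censored interval are illustrative choices, not the paper's API.

```python
def expand_to_person_period(times, events, cuts):
    """Expand right-censored survival data into person-period rows.

    times  : observed time per subject (event or censoring time)
    events : 1 if the event occurred, 0 if the subject was censored
    cuts   : increasing right edges of the discrete time intervals
    Returns a list of (subject_id, interval_index, label) tuples.
    A subject contributes one row per interval survived (label 0) and,
    if uncensored, a final row with label 1 in the event interval.
    Censored subjects contribute only fully observed intervals.
    """
    rows = []
    for i, (t, e) in enumerate(zip(times, events)):
        for k, edge in enumerate(cuts):
            left = cuts[k - 1] if k > 0 else 0.0
            if t <= left:
                break  # follow-up ended before this interval began
            if t <= edge:
                if e == 1:
                    rows.append((i, k, 1))  # event occurred in interval k
                # if censored inside interval k, drop the partial interval
                break
            rows.append((i, k, 0))  # survived the whole of interval k
    return rows
```

The resulting rows (optionally joined with the subject's covariates and an interval indicator) can be fed to any off-the-shelf binary classifier, whose predicted per-interval probabilities approximate the discrete hazard.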
📝 Abstract
In this work, we discuss what we refer to as reduction techniques for survival analysis, that is, techniques that "reduce" a survival task to a more common regression or classification task, without ignoring the specifics of survival data. Such techniques particularly facilitate machine learning-based survival analysis, as they allow for applying standard tools from machine and deep learning to many survival tasks without requiring custom learners. We provide an overview of different reduction techniques and discuss their respective strengths and weaknesses. We also provide a principled implementation of some of these reductions, such that they are directly available within standard machine learning workflows. We illustrate each reduction using dedicated examples and perform a benchmark analysis that compares their predictive performance to established machine learning methods for survival analysis.
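One standard way such a reduction avoids "ignoring the specifics of survival data" is inverse probability of censoring weighting (IPCW): a Kaplan-Meier estimate of the *censoring* distribution G is used to up-weight uncensored observations, which can then be passed, via the usual sample-weight argument, to any standard regression learner. The sketch below is a self-contained pure-Python illustration under that assumption; the function names are hypothetical and not taken from the paper's implementation.

```python
def censoring_km(times, events):
    """Kaplan-Meier estimate of the censoring survival function G.

    Treats censoring (event == 0) as the 'event' of interest and
    returns a function evaluating G just before a given time t.
    """
    pts = sorted(set(times))
    surv, steps = 1.0, []
    for t in pts:
        at_risk = sum(1 for x in times if x >= t)
        cens = sum(1 for x, e in zip(times, events) if x == t and e == 0)
        steps.append((t, surv))  # value of G just before time t
        if at_risk > 0:
            surv *= 1.0 - cens / at_risk

    def G_minus(t):
        g = 1.0
        for u, v in steps:
            if u <= t:
                g = v
            else:
                break
        return g

    return G_minus


def ipcw_weights(times, events):
    """Weight 1/G(t-) for uncensored subjects, 0 for censored ones."""
    G = censoring_km(times, events)
    return [(1.0 / G(t)) if e == 1 else 0.0 for t, e in zip(times, events)]
```

Censored observations receive weight zero and are effectively removed, while uncensored observations are up-weighted to compensate, so the weighted uncensored sample again represents the full cohort; the weights plug directly into learners that accept per-sample weights.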