Optimizing Cox Models with Stochastic Gradient Descent: Theoretical Foundations and Practical Guidances

📅 2024-08-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the lack of statistical foundation for stochastic gradient descent (SGD) optimizing mini-batch partial likelihood—rather than the standard partial likelihood—in deep Cox neural networks. Methodologically, it introduces the mini-batch maximum partial likelihood estimator (mb-MPLE) and systematically characterizes how batch size affects estimation consistency, convergence rate, and asymptotic efficiency. Theoretical contributions include: (1) establishing that the SGD estimator in Cox-NN achieves the optimal minimax convergence rate (up to logarithmic factors); (2) rigorously proving √n-consistency and asymptotic normality in classical Cox regression; and (3) quantifying the trade-off between batch size and statistical efficiency, yielding a practical batch-size selection criterion. Experiments demonstrate that, guided by this theory, SGD significantly outperforms full-batch gradient descent on massive survival datasets, achieving both faster convergence and superior statistical validity.
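To make the batch-size dependence described above concrete, here is a sketch of the two objectives involved. The notation below ($h_\theta$ for the Cox-NN risk score, $\delta_i$ for the event indicator, $t_i$ for the observed time) is an illustrative assumption, not taken verbatim from the paper; ties are ignored for simplicity. The standard partial log-likelihood forms each risk set over the full sample:

```latex
\ell_n(\theta) = \frac{1}{n}\sum_{i=1}^{n}\delta_i
\Big[\, h_\theta(x_i) - \log \sum_{j:\, t_j \ge t_i} e^{h_\theta(x_j)} \Big]
```

whereas the mini-batch version restricts the risk set to the sampled batch $B$:

```latex
\ell_B(\theta) = \frac{1}{|B|}\sum_{i\in B}\delta_i
\Big[\, h_\theta(x_i) - \log \sum_{j\in B:\, t_j \ge t_i} e^{h_\theta(x_j)} \Big]
```

Because the inner sum runs only over $B$, the expectation of $\ell_B$ changes with $|B|$, which is why the mb-MPLE targets a batch-size-dependent objective rather than an unbiased subsample of $\ell_n$.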

📝 Abstract
Optimizing Cox regression and its neural network variants poses substantial computational challenges in large-scale studies. Stochastic gradient descent (SGD), known for its scalability in model optimization, has recently been adapted to optimize Cox models. Unlike its conventional application, which typically targets a sum of independent individual losses, SGD for Cox models updates parameters based on the partial likelihood of a subset of data. Despite its empirical success, the theoretical foundation for optimizing Cox partial likelihood with SGD is largely underexplored. In this work, we demonstrate that the SGD estimator targets an objective function that is batch-size-dependent. We establish that the SGD estimator for the Cox neural network (Cox-NN) is consistent and achieves the optimal minimax convergence rate up to a polylogarithmic factor. For Cox regression, we further prove the $\sqrt{n}$-consistency and asymptotic normality of the SGD estimator, with variance depending on the batch size. Furthermore, we quantify the impact of batch size on Cox-NN training and its effect on the SGD estimator's asymptotic efficiency in Cox regression. These findings are validated by extensive numerical experiments and provide guidance for selecting batch sizes in SGD applications. Finally, we demonstrate the effectiveness of SGD in a real-world application where GD is infeasible due to the large scale of data.
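A minimal numerical sketch of the procedure the abstract describes: SGD where each step differentiates the partial likelihood of the sampled batch, with risk sets formed inside the batch. This is not the paper's implementation; the function names, the linear (Cox regression) model, the hyperparameters, and the Breslow-style handling that ignores ties are all simplifying assumptions for illustration.

```python
import numpy as np

def neg_partial_loglik(beta, X, times, events):
    """Negative Cox partial log-likelihood over one batch (no tie correction).

    Risk sets are formed *within the batch*, which is what makes the
    mini-batch objective depend on the batch size.
    """
    order = np.argsort(-times)                 # descending event time
    Xs, ev = X[order], events[order]
    scores = Xs @ beta                         # linear predictor x_i' beta
    # running log-sum-exp gives log sum_{j: t_j >= t_i} exp(x_j' beta)
    log_risk = np.logaddexp.accumulate(scores)
    return -np.sum(ev * (scores - log_risk))

def neg_partial_loglik_grad(beta, X, times, events):
    """Analytic gradient of the batch objective above."""
    order = np.argsort(-times)
    Xs, ev = X[order], events[order]
    scores = Xs @ beta
    w = np.exp(scores - scores.max())          # stabilized exp weights
    cum_w = np.cumsum(w)                       # risk-set denominators
    cum_wx = np.cumsum(w[:, None] * Xs, axis=0)
    return -np.sum(ev[:, None] * (Xs - cum_wx / cum_w[:, None]), axis=0)

def sgd_cox(X, times, events, batch_size=64, lr=0.5, epochs=100, seed=0):
    """SGD on the mini-batch partial likelihood: each update uses the
    partial likelihood of the sampled subset, not per-observation losses."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for s in range(0, n, batch_size):
            b = idx[s:s + batch_size]
            g = neg_partial_loglik_grad(beta, X[b], times[b], events[b])
            beta -= lr * g / len(b)
    return beta
```

On simulated proportional-hazards data the estimator recovers the direction of the true coefficients; note that, per the paper's result, its asymptotic variance depends on `batch_size`, so this sketch is not equivalent to full-batch optimization.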
Problem

Research questions and friction points this paper is trying to address.

The statistical properties of mini-batch Cox model optimization were previously unestablished
The role of the learning-rate-to-batch-size ratio in SGD dynamics was poorly understood
Standard full-batch methods fail on large-scale survival analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mini-batch maximum partial-likelihood estimator (mb-MPLE) for Cox models
Proofs of statistical consistency and optimal minimax convergence rates for the SGD estimator
Characterization of the learning-rate-to-batch-size ratio governing SGD training dynamics