Nonconvex Penalized LAD Estimation in Partial Linear Models with DNNs: Asymptotic Analysis and Proximal Algorithms

๐Ÿ“… 2025-11-26
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This paper addresses the statistical inference challenges arising from LAD estimation with nonconvex, nonsmooth regularization in high-dimensional partially linear models. Methodologically, it proposes a novel deep neural network (DNN)-based estimation framework: nonconvex penalties (e.g., SCAD, MCP) are embedded into DNN-driven partially linear models, coupled with a least absolute deviations loss, and optimized efficiently via infinite-dimensional variational analysis and proximal subgradient algorithms. Theoretically, it establishes consistency, optimal convergence rates, and asymptotic normality of the estimators; proves convergence of the relaxed optimization problem; and characterizes the fundamental trade-off between computational efficiency and statistical accuracy induced by nonconvex regularization. By simultaneously accommodating high dimensionality, objective nonconvexity, loss nonsmoothness, and DNN architectural complexity, this work provides a unified framework for robust high-dimensional inference.
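In schematic form (notation ours, assuming the standard partially linear setup; the paper's exact display may differ), the penalized LAD estimator described above solves

$$
\min_{\beta \in \mathbb{R}^{p_n},\; \theta \in \Theta_n}
\;\frac{1}{n} \sum_{i=1}^{n} \left| Y_i - X_i^{\top} \beta - f_{\theta}(Z_i) \right|
\;+\; \sum_{j=1}^{p_n} p_{\lambda_n}\!\left( |\beta_j| \right),
$$

where \(f_\theta\) is a sparse DNN whose architecture class \(\Theta_n\) expands (in width, depth, and sparsity level) with the sample size, and \(p_{\lambda_n}\) is a nonconvex penalty such as SCAD or MCP.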

๐Ÿ“ Abstract
This paper investigates the partial linear model via Least Absolute Deviation (LAD) regression. We parameterize the nonparametric term using Deep Neural Networks (DNNs) and formulate a penalized LAD problem for estimation. Our model exhibits the following challenges. First, the regularization term can be nonconvex and nonsmooth, necessitating the introduction of infinite-dimensional variational analysis and nonsmooth analysis into the asymptotic normality discussion. Second, our network must expand (in width, sparsity level, and depth) as more samples are observed, introducing additional difficulties for the theoretical analysis. Third, the oracle of the proposed estimator is itself defined through an ultra-high-dimensional, nonconvex, and discontinuous optimization problem, which already entails substantial computational and theoretical challenges. Under these challenges, we establish the consistency, convergence rate, and asymptotic normality of the estimator. Furthermore, we analyze the oracle problem itself and its continuous relaxation. We study the convergence of a proximal subgradient method for both formulations, highlighting how their structural differences lead to distinct computational subproblems along the iterations. In particular, the relaxed formulation admits significantly cheaper proximal updates, reflecting an inherent trade-off between statistical accuracy and computational tractability.
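The "cheaper proximal updates" the abstract alludes to are plausible in part because SCAD and MCP admit closed-form proximal operators. A minimal NumPy sketch of the standard piecewise formulas, assuming a unit step size (function names and defaults are ours, not the paper's):

```python
import numpy as np

def prox_scad(v, lam, gamma=3.7):
    """Closed-form proximal map of the SCAD penalty (unit step, gamma > 2).

    Coordinatewise solution of  argmin_t 0.5*(t - v)^2 + p_lam(|t|).
    """
    a = np.abs(v)
    soft = np.sign(v) * np.maximum(a - lam, 0.0)                           # |v| <= 2*lam
    mid = ((gamma - 1.0) * v - np.sign(v) * gamma * lam) / (gamma - 2.0)   # 2*lam < |v| <= gamma*lam
    return np.where(a <= 2.0 * lam, soft, np.where(a <= gamma * lam, mid, v))

def prox_mcp(v, lam, gamma=3.0):
    """Closed-form proximal map of the MCP penalty (unit step, gamma > 1)."""
    a = np.abs(v)
    soft = np.sign(v) * np.maximum(a - lam, 0.0)
    return np.where(a <= gamma * lam, gamma / (gamma - 1.0) * soft, v)     # no shrinkage beyond gamma*lam
```

Both maps act coordinatewise, threshold small inputs to zero, and leave large inputs untouched, which is what makes each proximal update cheap relative to the nonconvexity of the penalty.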
Problem

Research questions and friction points this paper is trying to address.

Estimating partial linear models with nonconvex penalties using DNNs
Addressing asymptotic normality under nonsmooth nonconvex regularization
Solving ultra-high-dimensional discontinuous optimization problems for LAD estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LAD regression with DNNs in partial linear models
Nonconvex penalized estimation requiring asymptotic analysis
Proximal algorithms for high-dimensional discontinuous optimization (see the sketch after this list)
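A minimal sketch of a proximal subgradient iteration for the linear part, assuming the DNN fit is held fixed within each update; this illustrates the general scheme, not the paper's specific algorithm, and all names and step-size choices are ours:

```python
import numpy as np

def prox_mcp(v, lam, gamma):
    """Closed-form MCP proximal map with unit step size (gamma > 1)."""
    a = np.abs(v)
    soft = np.sign(v) * np.maximum(a - lam, 0.0)
    return np.where(a <= gamma * lam, gamma / (gamma - 1.0) * soft, v)

def proximal_subgradient_lad(X, y, f_hat, lam, gamma=3.0, step0=0.1, iters=1000):
    """Illustrative proximal subgradient scheme for
        (1/n) * sum_i |y_i - x_i' beta - f_hat_i| + sum_j p_lam(|beta_j|),
    where f_hat_i = f_theta(z_i) is the (fixed) DNN fit of the nonparametric term.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for k in range(iters):
        s = step0 / np.sqrt(k + 1.0)        # diminishing step, standard for subgradient methods
        r = y - X @ beta - f_hat
        g = -(X.T @ np.sign(r)) / n         # a subgradient of the LAD loss at beta
        # prox of s * MCP(lam, gamma) equals the unit-step MCP prox with
        # parameters (s * lam, gamma / s), so the step is folded into them.
        beta = prox_mcp(beta - s * g, s * lam, gamma / s)
    return beta
```

Each iteration costs one pass over the data plus a coordinatewise threshold, which is where the structural difference between the oracle problem and its continuous relaxation shows up: the relaxed penalty keeps this proximal step in closed form.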
๐Ÿ”Ž Similar Papers
No similar papers found.
Lechen Feng
Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong
Haoran Li
Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong
Lucky Li
College of Computing, Data Science, and Society, University of California, Berkeley, CA 94720
Xingqiu Zhao
The Hong Kong Polytechnic University