Structure-Preserving Nonlinear Sufficient Dimension Reduction for Tensors

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
We address regression with tensor-valued predictors. We propose two nonlinear sufficient dimension reduction (SDR) methods, based respectively on the Tucker and CP decompositions, that preserve both the intrinsic multimodal structure of the predictors and the semantic meaning of the tensor modes. Our key contribution is the first formulation of nonlinear SDR that jointly maintains the original tensor structure and mode-wise interpretability. Theoretically, we establish Fisher consistency, statistical consistency, and convergence-rate guarantees. Algorithmically, the Tucker-based method is estimated through a sequence of least-squares steps and the CP-based method through a sequence of singular value decompositions, balancing geometric fidelity with model parsimony. Extensive simulations and two real-data applications show that our methods reduce the parameter count by roughly an order of magnitude while achieving markedly higher estimation accuracy and predictive performance than state-of-the-art alternatives, along with strong interpretability and computational efficiency.
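For context, the two decomposition templates referenced above take the following standard forms (generic notation, not the paper's; \times_k denotes the mode-k product and \circ the outer product):

```latex
% Tucker form: compress a large tensor X to a small core G via mode-wise factors
\mathcal{X} \approx \mathcal{G} \times_1 U_1 \times_2 U_2 \cdots \times_K U_K,
\qquad U_k \in \mathbb{R}^{p_k \times d_k}, \quad d_k \ll p_k.

% CP form: write the tensor as a sum of R rank-one tensors
\mathcal{X} \approx \sum_{r=1}^{R} \lambda_r \, u_r^{(1)} \circ u_r^{(2)} \circ \cdots \circ u_r^{(K)}.
```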

📝 Abstract
We introduce two nonlinear sufficient dimension reduction methods for regressions with tensor-valued predictors. Our goal is two-fold: the first is to preserve the tensor structure when performing dimension reduction, particularly the meaning of the tensor modes, for improved interpretation; the second is to substantially reduce the number of parameters in dimension reduction, thereby achieving model parsimony and enhancing estimation accuracy. Our two tensor dimension reduction methods echo the two commonly used tensor decomposition mechanisms: one is the Tucker decomposition, which reduces a larger tensor to a smaller one; the other is the CP decomposition, which represents an arbitrary tensor as a sum of rank-one tensors. We establish the Fisher consistency of our methods at the population level, as well as their estimation consistency and convergence rates. Both methods are easy to implement numerically: the Tucker form can be implemented through a sequence of least-squares steps, and the CP form can be implemented through a sequence of singular value decompositions. We investigate the finite-sample performance of our methods and show substantial improvements in accuracy over existing methods in simulations and two data applications.
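To make the Tucker-type reduction concrete, below is a minimal NumPy sketch of the classical truncated higher-order SVD (HOSVD), which compresses a tensor to a smaller core with one factor matrix per mode. It illustrates the mechanism the Tucker form builds on, not the paper's actual estimator; all names here are illustrative.

```python
import numpy as np

def unfold(X, mode):
    """Mode-k unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hosvd(X, ranks):
    """Truncated higher-order SVD: one SVD per mode, keeping the top-r
    left singular vectors of each unfolding, then projecting to the core."""
    factors = []
    for k, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(X, k), full_matrices=False)
        factors.append(U[:, :r])
    core = X
    for k, U in enumerate(factors):
        # multiply mode k of the core by U^T (a mode-k product)
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, k, 0), axes=1), 0, k)
    return core, factors

# A 10 x 12 x 8 tensor is reduced to a 3 x 3 x 2 core.
X = np.random.randn(10, 12, 8)
core, factors = hosvd(X, ranks=(3, 3, 2))
print(core.shape)  # (3, 3, 2)
```

Because each mode gets its own factor matrix, the axes of the reduced core still correspond one-to-one to the original modes, which is the structure-preservation property the paper emphasizes.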
Problem

Research questions and friction points this paper is trying to address.

How to perform nonlinear dimension reduction on tensor-valued predictors without destroying the tensor structure or the meaning of the modes
How to substantially cut the number of parameters, for model parsimony and better estimation accuracy
How to build such reductions on the two standard tensor decompositions, Tucker and CP
Innovation

Methods, ideas, or system contributions that make the work stand out.

First nonlinear SDR formulation that preserves the tensor structure and mode-wise interpretability
Two estimators built on the Tucker and CP decompositions, with Fisher consistency, estimation consistency, and convergence-rate guarantees
Simple implementation: sequential least squares for the Tucker form and sequential SVDs for the CP form (a toy sketch follows this list)
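As a rough illustration of the sequence-of-SVDs idea for the CP form, the toy sketch below greedily extracts rank-one terms by alternating power-method updates followed by deflation; for a matrix, each extraction step reduces to computing the leading singular pair. This is a generic heuristic for intuition only, not the paper's algorithm, and every name in it is illustrative.

```python
import numpy as np
from functools import reduce

def rank_one(X, n_iter=30):
    """One rank-one term via alternating updates (higher-order power iteration).
    For a matrix this is the power method for the leading singular vectors."""
    vecs = [np.ones(d) / np.sqrt(d) for d in X.shape]
    lam = 0.0
    for _ in range(n_iter):
        for k in range(X.ndim):
            T = np.moveaxis(X, k, 0)
            # contract every mode except k, collapsing the trailing axis each time
            for j in reversed([m for m in range(X.ndim) if m != k]):
                T = np.tensordot(T, vecs[j], axes=(T.ndim - 1, 0))
            lam = np.linalg.norm(T)
            vecs[k] = T / max(lam, 1e-12)  # guard against a zero contraction
    return lam, vecs

def greedy_cp(X, R):
    """Greedily peel off R rank-one terms by successive deflation."""
    terms, residual = [], X.astype(float).copy()
    for _ in range(R):
        lam, vecs = rank_one(residual)
        terms.append((lam, vecs))
        residual = residual - lam * reduce(np.multiply.outer, vecs)
    return terms
```

Greedy deflation is not optimal for general CP, but it conveys the sequential, one-term-at-a-time structure of the computation.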
Dianjun Lin
Department of Statistics, The Pennsylvania State University
Bing Li
Department of Statistics, The Pennsylvania State University
Lingzhou Xue
Professor of Statistics, The Pennsylvania State University
High Dimensional Statistics, Statistical Learning, Statistical Network Analysis, Nonconvex Optimization, Data Science