A Noise Sensitivity Exponent Controls Large Statistical-to-Computational Gaps in Single- and Multi-Index Models

📅 2026-03-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the origin of statistical-to-computational gaps in high-dimensional single- and multi-index models. It introduces the Noise Sensitivity Exponent (NSE), a quantity determined by the activation function, and analyzes how the NSE governs the gap in three settings: single-index models under large additive noise, the specialization transition of separable multi-index models, and hierarchical multi-index models, where the NSE sets the optimal rate at which directions are learned sequentially. The paper rigorously establishes that the NSE determines both the existence and the magnitude of the statistical-to-computational gap across these model classes, positioning it as a unifying parameter that links noise robustness, computational hardness, and feature specialization in high-dimensional learning.

📝 Abstract
Understanding when learning is statistically possible yet computationally hard is a central challenge in high-dimensional statistics. In this work, we investigate this question in the context of single- and multi-index models, classes of functions widely studied as benchmarks to probe the ability of machine learning methods to discover features in high-dimensional data. Our main contribution is to show that a Noise Sensitivity Exponent (NSE) - a simple quantity determined by the activation function - governs the existence and magnitude of statistical-to-computational gaps within a broad regime of these models. We first establish that, in single-index models with large additive noise, the onset of a computational bottleneck is fully characterized by the NSE. We then demonstrate that the same exponent controls a statistical-computational gap in the specialization transition of large separable multi-index models, where individual components become learnable. Finally, in hierarchical multi-index models, we show that the NSE governs the optimal computational rate at which different directions are sequentially learned. Taken together, our results identify the NSE as a unifying property linking noise robustness, computational hardness, and feature specialization in high-dimensional learning.
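The abstract does not spell out how an exponent of the activation function is extracted. A minimal numerical sketch of the general idea, assuming the relevant exponent appears as the power-law behavior of the Gaussian correlation overlap E[σ(g)σ(ρg + √(1−ρ²)g′)] in the overlap ρ (the classical "information exponent" picture for single-index models; the paper's precise NSE definition may differ). The activation `he2` and the log-log fitting procedure below are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def corr_under_noise(sigma, rho, n=2_000_000):
    """Monte Carlo estimate of E[sigma(g) * sigma(rho*g + sqrt(1-rho^2)*g')]
    for independent standard Gaussians g, g'."""
    g = rng.standard_normal(n)
    gp = rng.standard_normal(n)
    return np.mean(sigma(g) * sigma(rho * g + np.sqrt(1.0 - rho**2) * gp))

# Second Hermite polynomial He_2(x) = x^2 - 1: its exact overlap is 2*rho^2,
# so the log-log slope of the overlap vs. rho recovers the exponent 2.
he2 = lambda x: x**2 - 1.0

rhos = np.array([0.2, 0.35, 0.5])
vals = np.array([corr_under_noise(he2, r) for r in rhos])
slope = np.polyfit(np.log(rhos), np.log(vals), 1)[0]
print(slope)  # close to 2 for this activation
```

A small-slope (smooth, low-degree) activation keeps the overlap large under noise, while a higher exponent makes the signal decay faster with ρ, which is the mechanism the paper ties to computational hardness.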
Problem

Research questions and friction points this paper is trying to address.

statistical-to-computational gaps
single-index models
multi-index models
noise sensitivity
high-dimensional learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Noise Sensitivity Exponent
statistical-to-computational gap
single-index models
multi-index models
feature specialization