Fundamental computational limits of weak learnability in high-dimensional multi-index models

📅 2024-05-24
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work investigates the minimal sample complexity required for first-order iterative algorithms, such as gradient descent, to weakly learn the low-dimensional subspace structure of high-dimensional multi-index models. In the canonical high-dimensional regime where the sample size scales as $n = \alpha d$ (with $d$ the covariate dimension), the authors rigorously characterize the computational learnability phase boundary using the Approximate Message Passing (AMP) framework. They identify three phase-transition phenomena: (1) one-step learnability of a trivial subspace; (2) an "easy/hard" directional transition demarcated by a critical sample ratio $\alpha_c$; and (3) a stepwise hierarchical learning structure induced by couplings between directions. Notably, parity-like directions drive $\alpha_c \to \infty$, exposing an intrinsic source of computational hardness. The results unify the learning limits of a broad class of first-order methods, including shallow neural networks trained with gradient descent, and propose a "grand staircase" learning picture, delivering a systematic computational phase-transition theory for high-dimensional structural learning.

📝 Abstract
Multi-index models - functions which depend on the covariates only through a non-linear transformation of their projection on a subspace - are a useful benchmark for investigating feature learning with neural nets. This paper examines the theoretical boundaries of efficient learnability in this hypothesis class, focusing on the minimum sample complexity required for weakly recovering their low-dimensional structure with first-order iterative algorithms, in the high-dimensional regime where the number of samples $n = \alpha d$ is proportional to the covariate dimension $d$. Our findings unfold in three parts: (i) we identify under which conditions a trivial subspace can be learned with a single step of a first-order algorithm for any $\alpha > 0$; (ii) if the trivial subspace is empty, we provide necessary and sufficient conditions for the existence of an easy subspace, consisting of directions that can be learned only above a certain sample complexity $\alpha > \alpha_c$, where $\alpha_c$ marks a computational phase transition. In a limited but interesting set of really hard directions -- akin to the parity problem -- $\alpha_c$ is found to diverge. Finally, (iii) we show that interactions between different directions can result in an intricate hierarchical learning phenomenon, where directions can be learned sequentially when coupled to easier ones. We discuss in detail the grand staircase picture associated with these functions (and contrast it with the original staircase one). Our theory builds on the optimality of approximate message-passing among first-order iterative methods, delineating the fundamental learnability limit across a broad spectrum of algorithms, including neural networks trained with gradient descent, which we discuss in this context.
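The contrast between the trivial subspace (learnable in one step for any $\alpha > 0$) and hard, parity-like directions can be illustrated with a toy single-index experiment. The sketch below is not the paper's algorithm; it assumes a Gaussian-covariate single-index model and uses the simplest one-step estimator, correlating labels with covariates, which picks up a direction exactly when the link function has a non-zero first Hermite coefficient. A ReLU link (non-zero coefficient) yields order-one overlap with the hidden direction, while an even link such as the square (zero first Hermite coefficient, the single-index analogue of a hard direction) yields vanishing overlap at any fixed $\alpha$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2000                    # covariate dimension
alpha = 4.0                 # sample ratio, n = alpha * d
n = int(alpha * d)

w = rng.standard_normal(d)
w /= np.linalg.norm(w)      # hidden unit direction
X = rng.standard_normal((n, d))   # Gaussian covariates
z = X @ w                         # projections onto the hidden direction

def one_step_overlap(y):
    """One correlation step: w_hat = (1/n) sum_i y_i x_i, then overlap with w."""
    w_hat = X.T @ y / n
    return abs(w_hat @ w) / np.linalg.norm(w_hat)

# ReLU link: non-zero first Hermite coefficient -> weak recovery in one step
overlap_easy = one_step_overlap(np.maximum(z, 0.0))
# Square link: even function, zero first Hermite coefficient -> no signal in one step
overlap_hard = one_step_overlap(z**2)

print(f"ReLU link overlap:   {overlap_easy:.2f}")  # order one
print(f"square link overlap: {overlap_hard:.2f}")  # vanishes as d grows
```

Recovering the hard directions requires either higher-order (e.g. spectral) preprocessing or, as analyzed in the paper, larger sample complexity and couplings to easier directions along the staircase.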
Problem

Research questions and friction points this paper is trying to address.

Identifies conditions for learning trivial subspaces with first-order algorithms
Determines sample complexity thresholds for learning non-trivial subspaces
Analyzes hierarchical learning in multi-index models with coupled directions
Innovation

Methods, ideas, or system contributions that make the work stand out.

First-order iterative algorithms for weak learnability
Phase transition in sample complexity for easy subspaces
Hierarchical learning via direction interactions