🤖 AI Summary
Understanding how two-layer neural networks learn the low-dimensional relevant structure of multi-index models from high-dimensional noisy data remains a mathematical challenge.
Method: We analyze a simple modification of idealized single-pass gradient descent in which the data can be repeated, i.e., iterated upon twice.
Contributions/Results: We prove that (almost) all relevant directions are learned within $O(d \log d)$ iterations using only two passes over the data, surpassing the computational limits previously believed to be dictated by the Information and Leap exponents of the target function. We further identify a hierarchical learning mechanism that, when directions are coupled, generalizes the notion of staircase functions and extends learnability to hard function classes such as sparse parities. These results follow from a rigorous analysis of the high-dimensional dynamics of the relevant summary statistics under both single-pass and repeated gradient descent, and they show that networks recover the low-dimensional structure of coupled multi-index targets from data alone, without pre-processing or prior feature engineering.
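As a rough, hypothetical illustration of the two-pass idea (the target, architecture, and hyperparameters below are illustrative choices, not the paper's setup or exact algorithm), the sketch trains the first layer of a small two-layer ReLU network on a coupled, staircase-like target $y = x_1 + x_1 x_2$ and compares one versus two gradient steps per sample.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 128, 32              # input dimension and hidden width (illustrative, not the paper's scaling)
lr, n_steps = 0.1, 2000     # step size and number of fresh samples (illustrative)


def target(x):
    """Hypothetical coupled ("staircase"-like) target: y = x_1 + x_1 * x_2."""
    return x[0] + x[0] * x[1]


def train(passes_per_sample):
    """Online SGD on the first layer of a two-layer ReLU network.

    passes_per_sample = 1 mimics idealized single-pass SGD (each sample used once);
    passes_per_sample = 2 reuses every sample for a second gradient step.
    """
    W = rng.normal(size=(m, d)) / np.sqrt(d)      # first-layer weights
    a = rng.choice([-1.0, 1.0], size=m) / m       # frozen second-layer weights

    for _ in range(n_steps):
        x = rng.choice([-1.0, 1.0], size=d)       # fresh Rademacher input
        y = target(x)
        for _ in range(passes_per_sample):        # repeat the step on the same sample
            pre = W @ x                           # pre-activations
            err = a @ np.maximum(pre, 0.0) - y    # prediction residual (squared loss)
            W -= lr * np.outer(err * a * (pre > 0), x)

    # fraction of first-layer weight mass aligned with the relevant subspace span{e_1, e_2}
    return np.linalg.norm(W[:, :2]) / np.linalg.norm(W)


for p in (1, 2):
    print(f"passes per sample = {p}: overlap with relevant subspace = {train(p):.3f}")
```

Comparing the printed overlaps for one versus two passes per sample gives a rough, qualitative sense of how data reuse helps the first-layer weights align with the relevant directions; it is not meant to reproduce the paper's quantitative results.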
📝 Abstract
Neural networks can identify low-dimensional relevant structures within high-dimensional noisy data, yet our mathematical understanding of how they do so remains scarce. Here, we investigate the training dynamics of two-layer shallow neural networks trained with gradient-based algorithms, and discuss how they learn pertinent features in multi-index models, that is, target functions with low-dimensional relevant directions. In the high-dimensional regime, where the input dimension $d$ diverges, we show that a simple modification of the idealized single-pass gradient descent training scenario, where data can now be repeated or iterated upon twice, drastically improves its computational efficiency. In particular, it surpasses the limitations previously believed to be dictated by the Information and Leap exponents associated with the target function to be learned. Our results highlight the ability of networks to learn relevant structures from data alone without any pre-processing. More precisely, we show that (almost) all directions are learned with at most $O(d \log d)$ steps. Among the exceptions is a set of hard functions that includes sparse parities. In the presence of coupling between directions, however, these can be learned sequentially through a hierarchical mechanism that generalizes the notion of staircase functions. Our results are proven by a rigorous study of the evolution of the relevant statistics for high-dimensional dynamics.
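For concreteness, a minimal sketch of the setting in generic notation (not necessarily the paper's): a multi-index target depends on the $d$-dimensional input $x$ only through $k \ll d$ relevant directions,
$$ f^\star(x) \;=\; g\bigl(\langle w_1, x\rangle, \dots, \langle w_k, x\rangle\bigr), \qquad k \ll d. $$
A $2$-sparse parity such as $g(z_1, z_2) = \mathrm{sign}(z_1)\,\mathrm{sign}(z_2)$ is a standard example of a hard, uncoupled target, whereas a staircase-like target such as $g(z_1, z_2) = z_1 + z_1 z_2$ couples the two directions: the linear term makes $w_1$ easy to pick up, after which the interaction term reveals $w_2$.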