Feature learning is decoupled from generalization in high capacity neural networks

📅 2025-07-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing feature learning theory emphasizes the "strength" of feature learning while neglecting the intrinsic "quality" of the learned features, limiting its ability to explain generalization. This paper introduces the concept of *feature quality*, explicitly distinguishing it from learning strength, and builds a combined empirical and theoretical framework to compare feature learning dynamics between neural networks and kernel methods on benchmark tasks such as staircase functions. The results show that although neural networks exhibit stronger feature learning, their generalization advantage does not necessarily track learning strength, revealing a decoupling between feature learning strength and generalization performance. The work thus provides a conceptual framework and empirical evidence for developing a generalization theory centered on feature quality.

📝 Abstract
Neural networks outperform kernel methods, sometimes by orders of magnitude, e.g. on staircase functions. This advantage stems from the ability of neural networks to learn features, adapting their hidden representations to better capture the data. We introduce a concept we call feature quality to measure this performance improvement. We examine existing theories of feature learning and demonstrate empirically that they primarily assess the strength of feature learning, rather than the quality of the learned features themselves. Consequently, current theories of feature learning do not provide a sufficient foundation for developing theories of neural network generalization.
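The staircase functions named in the abstract are, in the feature-learning literature, commonly defined as sums of nested parity monomials on the Boolean hypercube; the sketch below follows that common convention (the paper's exact variant may differ), purely to make the benchmark concrete.

```python
import numpy as np

def staircase(X, k=None):
    """Staircase target on inputs in {-1, +1}^d:
    f(x) = x1 + x1*x2 + ... + x1*x2*...*xk (nested parities)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1] if k is None else k
    # Cumulative products along each row yield the nested
    # monomials x1, x1*x2, x1*x2*x3, ...; summing them gives f(x).
    return np.cumprod(X[:, :k], axis=1).sum(axis=1)

rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(4, 3))
y = staircase(X)
# e.g. x = (1, 1, -1) gives 1 + 1 + (-1) = 1
```

Each successive term depends on all previous coordinates, which is why such targets reward models that can discover the relevant coordinates incrementally (feature learners) over fixed-kernel methods.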
Problem

Research questions and friction points this paper is trying to address.

Neural networks outperform kernel methods because they can learn features
Current theories measure the strength of feature learning, not the quality of the learned features
Existing theories therefore do not provide a sufficient foundation for a theory of neural network generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupling feature learning from generalization
Introducing feature quality measurement
Evaluating feature learning theories empirically