Feature Learning beyond the Lazy-Rich Dichotomy: Insights from Representational Geometry

📅 2025-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing theoretical analyses of neural network feature learning rely on an oversimplified lazy-versus-rich dichotomy and neglect the geometric structure of learned representations. Method: The authors propose an analysis framework based on representational geometry that uses manifold untangling as a core observable to track how task-relevant representation manifolds transform during training, combining theoretical modeling with empirical validation. Contributions/Results: (1) When a network learns features useful for solving a task, the task-relevant manifolds become increasingly untangled; (2) tracking the underlying manifold geometry reveals distinct learning stages during training and distinct learning strategies associated with training hyperparameters, exposing feature-learning subtypes beyond the lazy-versus-rich dichotomy; (3) applied to neuroscience and machine learning, the framework yields geometric insights into the structural inductive biases of neural circuits solving cognitive tasks and the mechanisms behind out-of-distribution generalization in image classification. Together, these results provide a unified geometric characterization of feature-learning diversity across artificial and biological neural networks.

📝 Abstract
The ability to integrate task-relevant information into neural representations is a fundamental aspect of both biological and artificial intelligence. To enable theoretical analysis, recent work has examined whether a network learns task-relevant features (rich learning) or resembles a random feature model (or a kernel machine, i.e., lazy learning). However, this simple lazy-versus-rich dichotomy overlooks the possibility of various subtypes of feature learning that emerge from different architectures, learning rules, and data properties. Furthermore, most existing approaches emphasize weight matrices or neural tangent kernels, limiting their applicability to neuroscience because they do not explicitly characterize representations. In this work, we introduce an analysis framework based on representational geometry to study feature learning. Rather than analyzing which features are learned, we focus on characterizing how task-relevant representational manifolds evolve during the learning process. In both theory and experiment, we find that when a network learns features useful for solving a task, the task-relevant manifolds become increasingly untangled. Moreover, by tracking changes in the underlying manifold geometry, we uncover distinct learning stages throughout training, as well as different learning strategies associated with training hyperparameters, revealing subtypes of feature learning beyond the lazy-versus-rich dichotomy. Applying our method to neuroscience and machine learning, we gain geometric insights into the structural inductive biases of neural circuits solving cognitive tasks and the mechanisms underlying out-of-distribution generalization in image classification. Our framework provides a novel geometric perspective for understanding and quantifying feature learning in both artificial and biological neural networks.
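To make the abstract's central observable concrete: "untangling" of class manifolds can be quantified in many ways, and the paper's own measure is not specified in this summary (manifold-capacity-style analyses are common in this literature). The sketch below is a deliberately minimal proxy, assuming nothing about the paper's actual method: a Fisher-style ratio of between-class to within-class scatter computed on hidden representations, which grows as class manifolds separate over training. The function name `untangling_score` and the simulated "early vs. late training" representations are illustrative inventions, not the authors' code.

```python
import numpy as np

def untangling_score(reps, labels):
    """Fisher-style separability proxy: between-class scatter divided by
    within-class scatter. Higher values = more untangled class manifolds."""
    mu = reps.mean(axis=0)           # grand mean over all points
    between = within = 0.0
    for c in np.unique(labels):
        pts = reps[labels == c]
        mu_c = pts.mean(axis=0)      # class-manifold centroid
        between += len(pts) * np.sum((mu_c - mu) ** 2)
        within += np.sum((pts - mu_c) ** 2)
    return between / within

# Simulate hidden representations at two training stages: class means
# drift apart (scale grows) while within-class noise stays fixed.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 100)
offsets = np.array([-1.0, 1.0])[labels][:, None]   # per-point class offset
scores = []
for scale in (0.5, 3.0):                           # early vs. late training
    reps = scale * offsets + rng.normal(size=(200, 16))
    scores.append(untangling_score(reps, labels))
```

Tracking such a score across training checkpoints, rather than at convergence only, is what lets one see the multi-stage geometric evolution the paper describes.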
Problem

Research questions and friction points this paper is trying to address.

Analyzes feature learning subtypes beyond lazy-rich dichotomy
Studies task-relevant representational manifold evolution during learning
Provides geometric insights into neural circuits and generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analysis framework based on representational geometry
Tracking task-relevant manifold untangling during learning
Uncovering distinct learning stages and strategies