Understanding Deep Representation Learning via Layerwise Feature Compression and Discrimination

📅 2023-11-06
🏛️ arXiv.org
📈 Citations: 18
Influential: 2
🤖 AI Summary
This work investigates the mechanisms underlying hierarchical representation learning in deep neural networks. To address the central question of how features evolve across layers, the authors define metrics that jointly quantify layerwise within-class compression and between-class discrimination of intermediate features. They uncover, for the first time, a quantitative dual-rate pattern of feature evolution in deep linear networks: within-class features contract at a geometric rate, while between-class discrimination grows at a linear rate in the number of layers. The result is rigorously established for multi-class classification under minimum-norm, balanced, and approximately low-rank weight assumptions with nearly orthogonal input data, and a similar pattern emerges empirically in deep nonlinear networks. Extensive experiments validate the theory across architectures and datasets, and the results yield an interpretable foundation for representation learning along with practical guidance for layer selection in transfer learning.
📝 Abstract
Over the past decade, deep learning has proven to be a highly effective tool for learning meaningful features from raw data. However, it remains an open question how deep networks perform hierarchical feature learning across layers. In this work, we attempt to unveil this mystery by investigating the structures of intermediate features. Motivated by our empirical findings that linear layers mimic the roles of deep layers in nonlinear networks for feature learning, we explore how deep linear networks transform input data into output by investigating the output (i.e., features) of each layer after training in the context of multi-class classification problems. Toward this goal, we first define metrics to measure within-class compression and between-class discrimination of intermediate features, respectively. Through theoretical analysis of these two metrics, we show that the evolution of features follows a simple and quantitative pattern from shallow to deep layers when the input data is nearly orthogonal and the network weights are minimum-norm, balanced, and approximate low-rank: Each layer of the linear network progressively compresses within-class features at a geometric rate and discriminates between-class features at a linear rate with respect to the number of layers that data have passed through. To the best of our knowledge, this is the first quantitative characterization of feature evolution in hierarchical representations of deep linear networks. Empirically, our extensive experiments not only validate our theoretical results numerically but also reveal a similar pattern in deep nonlinear networks which aligns well with recent empirical studies. Moreover, we demonstrate the practical implications of our results in transfer learning. Our code is available at https://github.com/Heimine/PNC_DLN.
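The abstract's two metrics are not spelled out on this page. As a rough illustration only (these definitions are assumptions, not the paper's exact formulas), within-class compression can be measured as within-class variance relative to total variance, and between-class discrimination as angular separation of the class means:

```python
import numpy as np

def within_class_compression(features, labels):
    # Illustrative metric (assumed, not the paper's exact definition):
    # average squared deviation from each class mean, normalized by the
    # total variance. Small values mean same-class features have collapsed.
    global_mean = features.mean(axis=0)
    total = np.sum((features - global_mean) ** 2, axis=1).mean()
    classes = np.unique(labels)
    within = np.mean([
        np.sum((features[labels == c] - features[labels == c].mean(axis=0)) ** 2,
               axis=1).mean()
        for c in classes
    ])
    return within / total

def between_class_discrimination(features, labels):
    # Illustrative metric (assumed): one minus the largest cosine
    # similarity between any pair of class means. Larger values mean the
    # classes are angularly better separated.
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    means = means / np.linalg.norm(means, axis=1, keepdims=True)
    gram = means @ means.T
    off_diag = gram[~np.eye(len(classes), dtype=bool)]
    return 1.0 - off_diag.max()
```

Applied to the features of each layer in turn, the paper's claim corresponds to the first quantity decaying geometrically and a discrimination measure growing roughly linearly with depth.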
Problem

Research questions and friction points this paper is trying to address.

How deep networks perform hierarchical feature learning across layers
Measure within-class compression and between-class discrimination in features
Characterize feature evolution in deep linear and nonlinear networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layerwise feature compression and discrimination metrics
Deep linear networks for feature evolution analysis
Theoretical and empirical validation in nonlinear networks
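The geometric-compression half of the claimed pattern can be illustrated with a stylized numerical sketch (a hand-built construction, not the paper's trained networks): each layer below preserves the subspace holding the class means and contracts the orthogonal within-class subspace by a factor c, so a within-over-between compression metric should shrink by exactly c**2 per layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data in the paper's regime (all concrete choices are hypothetical):
# orthogonal class means in the first K coordinates, within-class variation
# confined to the remaining noise_dim coordinates.
K, noise_dim, n_per, L, c = 3, 5, 40, 6, 0.5
d = K + noise_dim

means = np.zeros((K, d))
means[:, :K] = 4.0 * np.eye(K)
X = np.repeat(means, n_per, axis=0)
labels = np.repeat(np.arange(K), n_per)
noise = rng.normal(size=(K * n_per, noise_dim))
for k in range(K):                      # center noise per class so class
    noise[labels == k] -= noise[labels == k].mean(axis=0)  # means are exact
X[:, K:] = noise

# One shared layer weight: identity on the signal subspace, contraction by
# c on the noise subspace -- a caricature of a balanced trained layer.
W = np.diag(np.concatenate([np.ones(K), c * np.ones(noise_dim)]))

def compression(F, y):
    # Within-class variance relative to the variance of the class means
    # (an illustrative metric; the paper's exact definition may differ).
    mus = np.stack([F[y == k].mean(axis=0) for k in range(K)])
    within = np.mean([np.sum((F[y == k] - mus[k]) ** 2, axis=1).mean()
                      for k in range(K)])
    between = np.sum((mus - mus.mean(axis=0)) ** 2, axis=1).mean()
    return within / between

feats, vals = X, []
for _ in range(L):
    feats = feats @ W.T
    vals.append(compression(feats, labels))

ratios = np.array(vals[1:]) / np.array(vals[:-1])
print(np.round(ratios, 6))  # each layer shrinks the metric by c**2 = 0.25
```

Because the construction is exact, the per-layer ratio of the metric is constant at c**2, i.e. geometric decay in depth; in the paper this behavior is derived for trained minimum-norm, balanced deep linear networks rather than assumed.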
Peng Wang
Department of Electrical Engineering & Computer Science, University of Michigan
Xiao Li
Department of Electrical Engineering & Computer Science, University of Michigan
Can Yaras
PhD Student, University of Michigan
Deep Learning, Optimization
Zhihui Zhu
Assistant Professor, Ohio State University
Machine Learning, Data Science, Signal Processing, Optimization
Laura Balzano
University of Michigan, Ann Arbor
matrix factorization, matrix completion, manifold optimization, nonconvex optimization
Wei Hu
Department of Electrical Engineering & Computer Science, University of Michigan; Michigan Institute for Data Science, University of Michigan
Qing Qu
Assistant Professor, Dept. of EECS, University of Michigan
Machine Learning, Nonconvex Optimization, High Dimensional Data Analysis, Deep Learning Theory