🤖 AI Summary
This work investigates how deep neural networks can efficiently learn hierarchical label structures in supervised learning. Focusing on settings with an unknown label hierarchy, where higher-level labels are recursively defined as simple functions of lower-level ones, the paper introduces a class of deep hierarchical models that layer-wise stochastic gradient descent (SGD) on residual networks can learn efficiently. This class goes beyond previously analyzed models, which are computable by logarithmic-depth circuits: it contains models that require polynomial depth to express, reaching the depth limit of efficient learnability. The authors also formalize a simplified teacher model in which a teacher who is partially aware of their own internal logic provides granular labels, and they prove that a hierarchical structure emerges that enables efficient learning. This establishes a formal connection between human pedagogical behavior and the learnability of hierarchical structures, offering a new theoretical foundation for understanding the empirical success of deep learning.
📝 Abstract
We consider supervised learning with $n$ labels and show that layerwise SGD on residual networks can efficiently learn a class of hierarchical models. This model class assumes the existence of an (unknown) label hierarchy $L_1 \subseteq L_2 \subseteq \dots \subseteq L_r = [n]$, where labels in $L_1$ are simple functions of the input, while for $i>1$, labels in $L_i$ are simple functions of simpler labels. Our class surpasses models that were previously shown to be learnable by deep learning algorithms, in the sense that it reaches the depth limit of efficient learnability. That is, there are models in this class that require polynomial depth to express, whereas previous models can be computed by log-depth circuits. Furthermore, we suggest that learnability of such hierarchical models might eventually form a basis for understanding deep learning. Beyond their natural fit for domains where deep learning excels, we argue that the mere existence of human ``teachers'' supports the hypothesis that hierarchical structures are inherently available. By providing granular labels, teachers effectively reveal ``hints'' or ``snippets'' of the internal algorithms used by the brain. We formalize this intuition, showing that in a simplified model where a teacher is partially aware of their internal logic, a hierarchical structure emerges that facilitates efficient learnability.
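To make the label hierarchy $L_1 \subseteq L_2 \subseteq \dots \subseteq L_r = [n]$ concrete, here is a minimal toy sketch with $r=3$ levels. The specific gates (ANDs, ORs, XOR) and the `l1_*`/`l2_*`/`l3_*` label names are our own illustrative choices, not the paper's construction: level-1 labels are simple functions of the raw input, and each higher level is a simple function of the level below it.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                              # input dimension
x = rng.integers(0, 2, size=d)     # a Boolean input vector

# Level 1: labels that are simple functions of the input
# (here: ANDs of adjacent coordinate pairs).
L1 = {f"l1_{i}": x[2 * i] & x[2 * i + 1] for i in range(d // 2)}

# Level 2: labels that are simple functions of level-1 labels
# (here: ORs of adjacent level-1 labels).
l1 = list(L1.values())
L2 = {f"l2_{i}": l1[2 * i] | l1[2 * i + 1] for i in range(len(l1) // 2)}

# Level 3 (top): a single label composing the level-2 labels.
l2 = list(L2.values())
L3 = {"l3_0": l2[0] ^ l2[1]}

# The learner observes (x, all labels); the hierarchy itself is unknown to it.
labels = {**L1, **L2, **L3}
print(labels)
```

Note that expanding the top label `l3_0` directly in terms of `x` yields a nested Boolean formula; in the paper's regime, such compositions can force polynomial depth, which is what distinguishes this class from log-depth-computable models.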