🤖 AI Summary
This work investigates how neural networks learn structured operations over sequential data by introducing the “sequential group composition” task: given a sequence of elements from a finite group, predict their cumulative product. Leveraging group representation theory and Fourier analysis, the authors combine theoretical proofs with empirical experiments on recurrent neural networks (RNNs) and multilayer feedforward architectures. They show that a two-layer network can solve the task perfectly but requires hidden width exponential in the sequence length $k$, whereas deeper models exploit the associativity of the group operation to drastically improve this scaling: RNNs compose elements sequentially in $k$ steps, while deep feedforward networks compose adjacent pairs in parallel across $\log k$ layers. The study offers a rigorous characterization of how deep architectures leverage algebraic structure for efficient sequential computation.
📝 Abstract
How do neural networks trained over sequences acquire the ability to perform structured operations, such as arithmetic, geometric, and algorithmic computation? To gain insight into this question, we introduce the sequential group composition task. In this task, networks receive a sequence of elements from a finite group encoded in a real vector space and must predict their cumulative product. The task can be order-sensitive and cannot be learned by a linear architecture. Our analysis isolates the roles of the group structure, encoding statistics, and sequence length in shaping learning. We prove that two-layer networks learn this task one irreducible representation of the group at a time, in an order determined by the Fourier statistics of the encoding. These networks can perfectly learn the task, but doing so requires a hidden width exponential in the sequence length $k$. In contrast, we show how deeper models exploit the associativity of the task to dramatically improve this scaling: recurrent neural networks compose elements sequentially in $k$ steps, while multilayer networks compose adjacent pairs in parallel in $\log k$ layers. Overall, the sequential group composition task offers a tractable window into the mechanics of deep learning.
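The two composition strategies contrasted in the abstract can be made concrete with a small sketch. Below is an illustrative example (not the paper's code) using the cyclic group $\mathbb{Z}_n$, whose product is addition mod $n$: associativity guarantees that composing left-to-right in $k$ steps, as an RNN would, and composing adjacent pairs in parallel over roughly $\log_2 k$ rounds, as a deep feedforward network would, yield the same cumulative product.

```python
from functools import reduce

n = 5  # order of the cyclic group Z_n (an assumption for illustration)

def compose(a: int, b: int) -> int:
    """Group product in Z_n: addition mod n."""
    return (a + b) % n

def sequential_product(seq):
    """k-step left-to-right composition, mirroring an RNN's recurrence."""
    return reduce(compose, seq)

def pairwise_product(seq):
    """Compose disjoint adjacent pairs in parallel; ~log2(k) halving rounds,
    mirroring the layers of a deep feedforward network."""
    seq = list(seq)
    rounds = 0
    while len(seq) > 1:
        # one "layer": combine adjacent pairs (seq[0]*seq[1], seq[2]*seq[3], ...)
        nxt = [compose(seq[i], seq[i + 1]) for i in range(0, len(seq) - 1, 2)]
        if len(seq) % 2:            # carry an unpaired trailing element forward
            nxt.append(seq[-1])
        seq = nxt
        rounds += 1
    return seq[0], rounds

seq = [3, 4, 2, 1, 4, 0, 2, 3]      # k = 8 elements of Z_5
total, depth = pairwise_product(seq)
assert total == sequential_product(seq)  # associativity: same cumulative product
print(total, depth)                 # prints: 4 3  (log2(8) = 3 rounds)
```

The same contrast holds for non-abelian groups (where order sensitivity matters); only `compose` would change, e.g. to permutation composition.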