Fourier Circuits in Neural Networks and Transformers: A Case Study of Modular Arithmetic with Multiple Inputs

📅 2024-02-12
📈 Citations: 13
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the algebraic task of $k$-input modular addition modulo $p$, analyzing the intrinsic computational mechanisms of neural networks and single-layer Transformers. Method: Leveraging Fourier analysis and margin maximization theory, we derive rigorous theoretical bounds and characterize spectral structures in both models. Contribution/Results: We establish the first tight lower bound—namely, $2^{2k-2}(p-1)$ neurons—for optimal generalization of single-hidden-layer networks on this task. Each hidden unit is provably aligned with a unique Fourier basis function, and its activation is governed by the task’s spectral structure. Crucially, the self-attention matrix in single-layer Transformers exhibits identical spectral organization. Our analysis quantifies the precise relationship between network width and generalization capacity, proves a bijective mapping between hidden units and Fourier modes, and unifies the computational paradigm underlying modular arithmetic representation learning in both architectures—validated through theoretical derivation and empirical experiments.

Technology Category

Application Category

📝 Abstract
In the evolving landscape of machine learning, a pivotal challenge lies in deciphering the internal representations harnessed by neural networks and Transformers. Building on recent progress toward comprehending how networks execute distinct target functions, our study embarks on an exploration of the underlying reasons behind networks adopting specific computational strategies. We direct our focus to the complex algebraic learning task of modular addition involving $k$ inputs. Our research presents a thorough analytical characterization of the features learned by stylized one-hidden layer neural networks and one-layer Transformers in addressing this task. A cornerstone of our theoretical framework is the elucidation of how the principle of margin maximization shapes the features adopted by one-hidden layer neural networks. Let $p$ denote the modulus, $D_p$ denote the dataset of modular arithmetic with $k$ inputs and $m$ denote the network width. We demonstrate that a neuron count of $ m geq 2^{2k-2} cdot (p-1) $, these networks attain a maximum $ L_{2,k+1} $-margin on the dataset $ D_p $. Furthermore, we establish that each hidden-layer neuron aligns with a specific Fourier spectrum, integral to solving modular addition problems. By correlating our findings with the empirical observations of similar studies, we contribute to a deeper comprehension of the intrinsic computational mechanisms of neural networks. Furthermore, we observe similar computational mechanisms in attention matrices of one-layer Transformers. Our work stands as a significant stride in unraveling their operation complexities, particularly in the realm of complex algebraic tasks.
Problem

Research questions and friction points this paper is trying to address.

Understanding internal representations in neural networks and Transformers.
Analyzing computational strategies for modular addition with multiple inputs.
Exploring margin maximization's role in feature learning in neural networks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explores neural networks and Transformers for modular arithmetic.
Links neuron count to maximum margin in modular addition.
Identifies Fourier spectrum alignment in hidden-layer neurons.
🔎 Similar Papers
No similar papers found.