Scaling Laws and Representation Learning in Simple Hierarchical Languages: Transformers vs. Convolutional Architectures

📅 2025-05-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how neural language models acquire hierarchical syntactic structure from sequential data. Method: using synthetic data generated by the analytically tractable Random Hierarchy Model (RHM), the authors systematically compare the scaling behavior of Transformers and convolutional networks on next-token prediction, complementing theoretical analysis with empirical evaluation. Contribution/Results: the study provides theoretical and empirical evidence that convolutional networks, thanks to their local receptive fields and weight sharing, are better aligned with hierarchical generative processes than globally attending Transformers: their test error decays faster with training-set size, i.e. with a larger scaling exponent. This demonstrates a critical coupling between architectural inductive bias and the statistical properties of hierarchical data, and establishes an architecture-dependent scaling theory for representation learning and structural generalization.

📝 Abstract
How do neural language models acquire a language's structure when trained for next-token prediction? We address this question by deriving theoretical scaling laws for neural network performance on synthetic datasets generated by the Random Hierarchy Model (RHM) -- an ensemble of probabilistic context-free grammars designed to capture the hierarchical structure of natural language while remaining analytically tractable. Previously, we developed a theory of representation learning based on data correlations that explains how deep learning models capture the hierarchical structure of the data sequentially, one layer at a time. Here, we extend our theoretical framework to account for architectural differences. In particular, we predict and empirically validate that convolutional networks, whose structure aligns with that of the generative process through locality and weight sharing, enjoy a faster scaling of performance compared to transformer models, which rely on global self-attention mechanisms. This finding clarifies the architectural biases underlying neural scaling laws and highlights how representation learning is shaped by the interaction between model architecture and the statistical properties of data.
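For intuition, the RHM described above can be sketched as a depth-limited probabilistic grammar in which every symbol expands into a fixed number of children via randomly drawn production rules. The following is a minimal illustrative generator, not the authors' code: the parameter names (`vocab_size`, `branching`, `n_rules`, `depth`) are assumptions, and the actual RHM additionally constrains rules to be unambiguous, which this sketch omits.

```python
import random

def make_grammar(vocab_size, branching, n_rules, depth, seed=0):
    """Sample random production rules for each level of the hierarchy.
    Each symbol at a level expands into `branching` symbols at the next
    level via one of `n_rules` equally likely productions."""
    rng = random.Random(seed)
    grammar = []
    for _ in range(depth):
        rules = {
            sym: [tuple(rng.randrange(vocab_size) for _ in range(branching))
                  for _ in range(n_rules)]
            for sym in range(vocab_size)
        }
        grammar.append(rules)
    return grammar

def sample_sentence(grammar, rng=None):
    """Expand a random root symbol down the hierarchy; return the leaves."""
    rng = rng or random.Random()
    level = [rng.randrange(len(grammar[0]))]
    for rules in grammar:
        # every symbol expands via a uniformly chosen production rule
        level = [child for sym in level for child in rng.choice(rules[sym])]
    return level

g = make_grammar(vocab_size=4, branching=2, n_rules=2, depth=3)
sent = sample_sentence(g, random.Random(1))
# depth 3 with branching 2 yields 2**3 = 8 leaf tokens
```

A next-token-prediction dataset is then obtained by treating the leaf sequences as sentences, which is what makes the ensemble tractable: the correlations between leaves are fully determined by the tree structure.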
Problem

Research questions and friction points this paper is trying to address.

How do neural language models learn hierarchical structure via next-token prediction?
How does performance scaling differ between transformers and convolutional architectures on synthetic hierarchical data?
What theoretical framework links architectural inductive biases to representation-learning efficiency?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Derivation of theoretical scaling laws for neural network performance on RHM data
Prediction, validated empirically, that convolutional networks scale faster because locality and weight sharing align with the generative process
Extension of a correlation-based theory of representation learning to account for architectural differences, including transformers' global self-attention
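The scaling laws above compare the exponent β in test error ≈ c · P^(−β), where P is the training-set size; a faster error decay corresponds to a larger β. A generic way to estimate β from (size, error) measurements is a least-squares fit in log-log coordinates. This is an illustrative sketch, not the paper's fitting procedure:

```python
import math

def fit_scaling_exponent(sizes, errors):
    """Fit log(error) = log(c) - beta * log(P) by least squares and
    return beta; a faster error decay gives a larger beta."""
    xs = [math.log(p) for p in sizes]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# sanity check on synthetic data: error = P**-0.5 should recover beta = 0.5
sizes = [1e3, 1e4, 1e5]
beta = fit_scaling_exponent(sizes, [p ** -0.5 for p in sizes])
```

Comparing the fitted β across architectures on the same RHM data is the kind of measurement the paper's theory predicts.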
Francesco Cagnetta
Scuola Internazionale Superiore di Studi Avanzati (SISSA), Via Bonomea 265, 34136 Trieste, Italy

Alessandro Favero
Institute of Physics, École polytechnique fédérale de Lausanne (EPFL), Lausanne, Switzerland

Antonio Sclocchi
Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom

Matthieu Wyart
Professor of Physics, Johns Hopkins