Learning Syntax Without Planting Trees: Understanding When and Why Transformers Generalize Hierarchically

📅 2024-04-25
🏛️ arXiv.org
📈 Citations: 6
Influential: 0
🤖 AI Summary
This study investigates the mechanisms that enable Transformers to generalize hierarchically without explicit syntactic supervision, and identifies sources of this inductive bias. Method: Transformers are trained on synthetic syntactic datasets under several objectives (language modeling, prefix language modeling, and sequence-to-sequence), then analyzed with structured pruning and a Bayesian, description-length-based comparison of candidate grammars. Contribution/Results: (1) The standard language modeling (LM) objective consistently induces hierarchical generalization, whereas the other objectives often fail to. (2) Pruning reveals that trained Transformers contain co-existing subnetworks with distinct generalization behaviors, one corresponding to hierarchical structure and one to linear order. (3) The preference for hierarchical generalization correlates with a Bayesian simplicity principle: Transformers tend to generalize hierarchically on datasets whose simplest explanation, in description-length terms, is a hierarchical grammar rather than a regular grammar encoding linear order.
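The contrast between hierarchical and linear generalization can be made concrete with English question formation, the kind of synthetic task used in this line of work. The sketch below is purely illustrative, not the paper's code: the auxiliary lexicon, the example sentences, and the externally supplied main-auxiliary index (standing in for a real parse) are all assumptions.

```python
# Hypothetical illustration: hierarchical vs. linear rules for English
# question formation. Sentences are token lists; auxiliaries come from
# a tiny made-up lexicon.
AUX = {"can", "will", "does", "is"}

def linear_rule(tokens):
    """Front the FIRST auxiliary in the string (linear-order hypothesis)."""
    i = next(j for j, t in enumerate(tokens) if t in AUX)
    return [tokens[i]] + tokens[:i] + tokens[i + 1:]

def hierarchical_rule(tokens, main_aux_index):
    """Front the MAIN-clause auxiliary (hierarchical hypothesis).
    The main-clause auxiliary index is given, standing in for a parse."""
    i = main_aux_index
    return [tokens[i]] + tokens[:i] + tokens[i + 1:]

# Unambiguous sentence: only one auxiliary, so both rules agree.
simple = "the walrus can swim".split()
assert linear_rule(simple) == hierarchical_rule(simple, 2)

# Disambiguating sentence: a relative clause contains an earlier
# auxiliary, so the two rules diverge -- this is what probes
# generalization to unseen syntactic structures.
hard = "the walrus that is tall can swim".split()
print(linear_rule(hard))           # fronts the embedded "is" (wrong)
print(hierarchical_rule(hard, 5))  # fronts the main-clause "can" (correct)
```

Models trained only on unambiguous sentences are then evaluated on the disambiguating ones: a hierarchical generalizer fronts "can", a linear one fronts "is".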

📝 Abstract
Transformers trained on natural language data have been shown to learn its hierarchical structure and generalize to sentences with unseen syntactic structures without explicitly encoding any structural bias. In this work, we investigate sources of inductive bias in transformer models and their training that could cause such generalization behavior to emerge. We extensively experiment with transformer models trained on multiple synthetic datasets and with different training objectives, and show that while other objectives, e.g. sequence-to-sequence modeling and prefix language modeling, often failed to lead to hierarchical generalization, models trained with the language modeling objective consistently learned to generalize hierarchically. We then conduct pruning experiments to study how transformers trained with the language modeling objective encode hierarchical structure. When pruning these models, we find jointly existing subnetworks with different generalization behaviors (subnetworks corresponding to hierarchical structure and to linear order). Finally, we take a Bayesian perspective to further uncover transformers' preference for hierarchical generalization: we establish a correlation between whether transformers generalize hierarchically on a dataset and whether the simplest explanation of that dataset is provided by a hierarchical grammar compared to regular grammars exhibiting linear generalization.
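The Bayesian "simplest explanation" comparison in the abstract can be sketched as a two-part minimum-description-length (MDL) criterion: prefer the grammar G minimizing L(G) + L(D | G). The sketch below is a toy, not the paper's analysis; the grammar sizes and per-sentence probabilities are made-up numbers chosen only to show how a costlier-to-state but better-fitting hierarchical grammar can win once enough data is observed.

```python
# Hypothetical two-part MDL comparison: bits to encode the grammar plus
# bits to encode the data under it (negative log-likelihood).
import math

def description_length(grammar_bits, sent_probs):
    """Total code length: L(G) + L(D | G), with L(D | G) = -sum log2 p."""
    data_bits = -sum(math.log2(p) for p in sent_probs)
    return grammar_bits + data_bits

# Toy dataset of 40 sentences. The hierarchical grammar costs more bits
# to state but assigns each sentence higher probability; the regular
# (linear-order) grammar is cheap to state but fits the data poorly.
hier = description_length(grammar_bits=120, sent_probs=[0.30] * 40)
lin = description_length(grammar_bits=60, sent_probs=[0.05] * 40)

preferred = "hierarchical" if hier < lin else "linear"
print(f"hierarchical: {hier:.1f} bits, linear: {lin:.1f} bits -> {preferred}")
```

With few sentences the cheap linear grammar wins this toy comparison; as data accumulates, the better fit of the hierarchical grammar dominates its higher upfront cost, mirroring the paper's claim that hierarchical generalization appears when a hierarchical grammar is the simplest explanation of the dataset.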
Problem

Research questions and friction points this paper is trying to address.

Investigates inductive bias in transformers for hierarchical generalization.
Explores training objectives enabling hierarchical generalization in transformers.
Analyzes subnetworks in transformers for hierarchical and linear generalization.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformers learn hierarchical structure without explicit bias.
Language modeling objective enables hierarchical generalization.
Pruning reveals subnetworks with distinct generalization behaviors.