From Shortcut to Induction Head: How Data Diversity Shapes Algorithm Selection in Transformers

📅 2025-12-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how the pretraining data distribution determines whether shallow Transformers learn induction heads (generalizable algorithms) or positional shortcuts (memorization behaviors) in trigger-output prediction tasks. Method: We combine gradient analysis of single-layer Transformers, theoretical derivations in both the infinite- and finite-sample regimes, controlled synthetic-data experiments, and attention-head attribution diagnostics. Contribution/Results: We theoretically establish that the max-sum ratio of the trigger-spacing distribution serves as a critical phase-transition parameter governing induction-head emergence; further, we prove an inherent trade-off between pretraining context length and out-of-distribution (OOD) generalization. We derive an optimal data-distribution design principle that minimizes computational cost per sample. Empirically, broadening the context-length distribution robustly elicits induction heads, achieving near-perfect (≈100%) OOD generalization accuracy. This work provides the first rigorous theoretical characterization of the data-diversity threshold that governs mechanism selection: induction versus shortcut learning.

📝 Abstract
Transformers can implement both generalizable algorithms (e.g., induction heads) and simple positional shortcuts (e.g., memorizing fixed output positions). In this work, we study how the choice of pretraining data distribution steers a shallow transformer toward one behavior or the other. Focusing on a minimal trigger-output prediction task (copying the token immediately following a special trigger upon its second occurrence), we present a rigorous analysis of gradient-based training of a single-layer transformer. In both the infinite- and finite-sample regimes, we prove a transition in the learned mechanism: if input sequences exhibit sufficient diversity, measured by a low "max-sum" ratio of trigger-to-trigger distances, the trained model implements an induction head and generalizes to unseen contexts; by contrast, when this ratio is large, the model resorts to a positional shortcut and fails to generalize out-of-distribution (OOD). We also reveal a trade-off between the pretraining context length and OOD generalization, and derive the optimal pretraining distribution that minimizes computational cost per sample. Finally, we validate our theoretical predictions with controlled synthetic experiments, demonstrating that broadening context distributions robustly induces induction heads and enables OOD generalization. Our results shed light on the algorithmic biases of pretrained transformers and offer conceptual guidelines for data-driven control of their learned behaviors.
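The trigger-output task described in the abstract can be sketched as a small data generator. The token ids, vocabulary, and function name below are illustrative assumptions, not the paper's actual setup; the point is only to make the task concrete: the trigger appears twice, and the correct prediction at the second occurrence is the token that followed the first.

```python
import random

TRIGGER = 0                     # hypothetical special trigger token id
VOCAB = list(range(1, 50))      # ordinary token ids (assumed vocabulary)

def make_sequence(length=20, gap=7):
    """Build one trigger-output example: the trigger occurs twice,
    separated by `gap` positions; the target is the token immediately
    after the first occurrence, and the context ends at the second."""
    seq = [random.choice(VOCAB) for _ in range(length)]
    first = random.randrange(0, length - gap - 2)
    seq[first] = TRIGGER
    seq[first + gap] = TRIGGER          # second occurrence
    target = seq[first + 1]             # token the model must copy
    return seq[: first + gap + 1], target

seq, target = make_sequence()
# An induction head solves this by attending from the second trigger
# back to the token after the first; a positional shortcut instead
# memorizes the absolute position of the answer seen during training,
# which breaks when the gap distribution changes at test time.
```

Varying `gap` across training sequences is exactly the "diversity" knob the paper studies: a single fixed gap makes the positional shortcut viable, while a spread of gaps forces the copying mechanism.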
Problem

Research questions and friction points this paper is trying to address.

Analyzes how data diversity influences transformer algorithm selection
Studies transition between induction heads and positional shortcuts in transformers
Examines trade-off between pretraining context length and OOD generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data diversity steers transformers to induction heads or shortcuts
Low max-sum ratio triggers induction heads for OOD generalization
Optimal pretraining distribution balances context length and computational cost
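The "max-sum" diversity statistic can be illustrated in one line. The reading below, the largest trigger-to-trigger distance in the distribution's support divided by the sum of distances in the support, is our assumption for illustration only, not the paper's formal definition; `max_sum_ratio` is a hypothetical helper name.

```python
def max_sum_ratio(gaps):
    """Assumed diversity statistic over trigger-to-trigger distances.
    A value of 1 means a single gap dominates (shortcut-prone data);
    a low value means gaps are spread out (induction-head-friendly)."""
    support = set(gaps)                 # distinct gap values observed
    return max(support) / sum(support)

narrow = max_sum_ratio([7, 7, 7, 7])   # one gap value -> ratio 1.0
broad = max_sum_ratio([3, 5, 7, 11])   # spread-out gaps -> ratio 11/26
```

Under this reading, the paper's phase transition says that training distributions with a ratio below some critical threshold yield induction heads, while those above it yield positional shortcuts.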