🤖 AI Summary
This study investigates the minimal parameter count required for small Transformer models to learn nested mathematical operations (e.g., SUM, MAX, MED) in the ListOps benchmark, and how this relates to intrinsic task difficulty. Methodologically, it employs progressive difficulty scaling, multi-task joint training, and systematic ablation studies, complemented by embedding visualizations and module-wise activation tracking. Results show that multi-task training substantially lowers the learning threshold for challenging tasks like SUM: models too small to learn SUM in isolation—falling below the single-task capacity threshold—acquire robust, generalizable SUM capability after multi-task pretraining. Crucially, this work provides the first evidence that task composition induces *number sense* representations: models develop numerically structured embeddings, exhibit strong parity discrimination, and rely more heavily on attention mechanisms. These findings offer a novel perspective on the emergence of mathematical reasoning capabilities in resource-constrained neural models.
📝 Abstract
The ability of a model to learn a task depends strongly on both the task's difficulty and the model's size. We aim to understand how task difficulty relates to the minimum number of parameters required to learn specific tasks in small transformer models. Our study focuses on the ListOps dataset, which consists of nested mathematical operations. We gradually increase task difficulty by introducing new operations, or combinations of operations, into the training data. We observe that sum modulo n is the hardest operation to learn. Curiously, when combined with other operations such as maximum and median, the sum operation becomes easier to learn and requires fewer parameters. Joint training not only improves performance but also leads to qualitatively different model behavior: we present evidence that models trained only on SUM may be memorizing, failing to capture the number structure in their embeddings, whereas models trained on a mixture of SUM and other operations exhibit number-like representations in the embedding space and a strong ability to distinguish parity. Furthermore, the SUM-only model relies more heavily on its feedforward layers, while the jointly trained model activates the attention mechanism more. Finally, we show that learning pure SUM can be induced in models below the single-task learning threshold for pure SUM by pretraining them on MAX+MED. Our findings indicate that emergent abilities in language models depend not only on model size but also on the training curriculum.
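To make the task concrete: a ListOps example is a nested prefix expression over single digits that the model must reduce to a single digit. The sketch below is a minimal reference evaluator for such expressions; the exact token format, the operation names, and the modulus of 10 for SUM are assumptions for illustration, not taken from the paper's data generator.

```python
# Minimal evaluator for ListOps-style expressions (illustrative sketch).
# Expressions are nested prefix operations over single digits, e.g.
# "[SUM [MAX 2 6 8] 3 [MED 1 5 9]]", where SUM is sum modulo MOD.
import statistics

MOD = 10  # assumed modulus for the SUM (sum modulo n) operation

OPS = {
    "MAX": max,
    "MED": lambda xs: int(statistics.median(xs)),
    "SUM": lambda xs: sum(xs) % MOD,
}

def evaluate(expr: str) -> int:
    """Recursively evaluate a bracketed ListOps expression to a digit."""
    tokens = expr.replace("[", " [ ").replace("]", " ] ").split()
    pos = 0

    def parse():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if tok == "[":
            op = OPS[tokens[pos]]  # operation name follows the "["
            pos += 1
            args = []
            while tokens[pos] != "]":
                args.append(parse())
            pos += 1  # consume the closing "]"
            return op(args)
        return int(tok)  # a bare digit

    return parse()
```

For the example above, MAX reduces to 8 and MED to 5, so SUM yields (8 + 3 + 5) mod 10 = 6. The recursive reduction mirrors the hierarchical structure the transformer must learn implicitly from flat token sequences.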