🤖 AI Summary
This work investigates the mismatch between human-like reasoning and the behavior of large language models (LLMs) in compositional skill learning, particularly in arithmetic tasks where models exhibit unreliable reasoning and poor out-of-distribution generalization. By training Transformer models on synthetic arithmetic tasks and combining fine-grained diagnostics, ablation studies, and distribution-shift evaluations, the study finds that models tend to acquire composed skills in reverse order or in parallel rather than in the correct procedural order. The authors introduce the term “shattered compositionality” for this phenomenon, attributing it to models’ reliance on statistical correlations in the training data rather than causal, programmatic reasoning. This limitation persists across modern LLMs and is not resolved merely by scaling up model size or incorporating scratchpad mechanisms, highlighting a fundamental misalignment between current learning paradigms and idealized reasoning structures.
📝 Abstract
Large language models (LLMs) often exhibit unexpected errors or unintended behavior, even at scale. While recent work reveals discrepancies between LLMs and humans in skill composition, the learning dynamics of skill composition and the underlying cause of this non-human behavior remain elusive. In this study, we investigate these learning dynamics by training transformers on synthetic arithmetic tasks. Through extensive ablations and fine-grained diagnostic metrics, we discover that transformers do not reliably build skill compositions according to human-like sequential rules. Instead, they often acquire skills in reverse order or in parallel, which leads to unexpected mixing errors, especially under distribution shifts, a phenomenon we refer to as shattered compositionality. To explain these behaviors, we provide evidence that correlational matching to the training data, rather than causal or procedural composition, shapes learning dynamics. We further show that shattered compositionality persists in modern LLMs and is not mitigated by model scaling alone or by scratchpad-based reasoning. Our results reveal a fundamental mismatch between a model's learning behavior and desired skill compositions, with implications for reasoning reliability, out-of-distribution robustness, and alignment.
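To make the setup concrete, a synthetic compositional-arithmetic task of the kind the abstract describes can be sketched as nested expressions whose correct evaluation requires applying sub-skills (e.g. addition, multiplication) in a fixed procedural order, with held-out nesting depths serving as a distribution shift. This is a minimal illustrative sketch, not the paper's actual data pipeline; the function names `make_expression` and `make_dataset` and the choice of operators and depth-based OOD split are assumptions.

```python
import random

def make_expression(rng, depth):
    """Recursively build a nested arithmetic expression and its value.

    Evaluating the string correctly requires composing sub-skills
    (digit addition/multiplication) innermost-first, which is the
    procedural order the model is meant to learn.
    """
    if depth == 0:
        n = rng.randint(0, 9)
        return str(n), n
    left_s, left_v = make_expression(rng, depth - 1)
    right_s, right_v = make_expression(rng, depth - 1)
    op = rng.choice(["+", "*"])
    value = left_v + right_v if op == "+" else left_v * right_v
    return f"({left_s}{op}{right_s})", value

def make_dataset(n, depth, seed=0):
    """Return n (expression, answer) pairs at a fixed nesting depth.

    Training on shallow depths and evaluating on deeper, unseen
    depths gives a simple out-of-distribution split.
    """
    rng = random.Random(seed)
    return [make_expression(rng, depth) for _ in range(n)]

# Example: train on depth-2 expressions, hold out depth-4 for OOD eval.
train_data = make_dataset(1000, depth=2, seed=0)
ood_data = make_dataset(200, depth=4, seed=1)
```

Diagnostics like those described above would then track, over training, which depth of sub-expression the model first evaluates correctly, revealing whether skills emerge in the expected innermost-first order or in reverse/parallel.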