Chain of Thought in Order: Discovering Learning-Friendly Orders for Arithmetic

📅 2025-06-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates how the ordering of reasoning steps in chain-of-thought (CoT) sequences affects Transformer models’ learnability on arithmetic tasks, addressing the training instability caused by fixed, arbitrary step orders. Method: We propose a two-stage hierarchical reordering framework: (1) dynamically filtering promising step orders based on early-training loss, and (2) efficiently searching among billions of candidates via coordinated inter-block and intra-block optimization. Contribution/Results: This is the first systematic study to reveal the critical impact of CoT step ordering on learning efficiency. We introduce multi-order mixed training and a loss-driven order identification mechanism. Evaluated on four order-sensitive arithmetic tasks, our method significantly improves convergence speed and generalization. Notably, on multiplication tasks, it automatically rediscovers the empirically optimal reverse-digit ordering pattern—demonstrating both interpretability and effectiveness.
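The loss-driven order identification idea can be sketched as follows: train briefly on a mixture of candidate step orders, then keep the orders whose loss drops fastest in the early stage. This is an illustrative sketch, not the paper's actual implementation; the function name and the dictionary-based loss inputs are assumptions for demonstration.

```python
def rank_orders_by_early_loss(candidate_orders, initial_losses, early_losses, keep=4):
    """Rank candidate CoT step orders by their early-training loss drop.

    initial_losses / early_losses map each order to its loss at the start
    of training and after a short warm-up, respectively (hypothetical
    interface; the paper trains one model on a mixture of orders).
    """
    drops = {
        order: initial_losses[order] - early_losses[order]
        for order in candidate_orders
    }
    # A larger early drop is taken as evidence of a learning-friendly order.
    return sorted(candidate_orders, key=lambda o: drops[o], reverse=True)[:keep]


# Toy usage with made-up loss values:
orders = ["forward", "reverse", "interleaved"]
initial = {"forward": 2.0, "reverse": 2.1, "interleaved": 1.9}
early = {"forward": 1.8, "reverse": 0.9, "interleaved": 1.5}
best = rank_orders_by_early_loss(orders, initial, early, keep=2)
# best == ["reverse", "interleaved"]
```

Here the reverse order wins because its loss fell furthest during warm-up, mirroring the paper's observation that benign orders show fast early loss drops.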

📝 Abstract
Chain of thought, i.e., step-by-step reasoning, is fundamental to Transformers. Beyond which intermediate steps are used, the order of these steps critically affects the difficulty of the reasoning. This study addresses a novel task of unraveling chain of thought: reordering decoder input tokens into a learning-friendly sequence for Transformers to learn arithmetic tasks. The proposed pipeline first trains a Transformer on a mixture of target sequences arranged in different orders and then identifies benign orders as those with fast loss drops in the early stage. As the search space grows factorially with sequence length, we propose a two-stage hierarchical approach for inter- and intra-block reordering. Experiments on four order-sensitive arithmetic tasks show that our method identifies a learning-friendly order out of a few billion candidates. Notably, on the multiplication task, it recovered the reverse-digit order reported in prior studies.
Problem

Research questions and friction points this paper is trying to address.

Discovering optimal token orders for Transformers' reasoning
Reducing learning difficulty via step sequence reordering
Solving factorial search space in arithmetic tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reorders decoder tokens for learning-friendly sequences
Uses hierarchical approach for inter-intra block reordering
Identifies optimal orders via early loss drop analysis
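The hierarchical inter-/intra-block idea can be illustrated with a small sketch: rather than searching all (B·k)! permutations of B·k tokens at once, first permute the B blocks, then permute the k steps inside each block, shrinking the space to B!·(k!)^B. The exhaustive enumeration below stands in for the paper's coordinated optimization and is only a toy illustration.

```python
from itertools import permutations
from math import factorial

def hierarchical_orders(blocks):
    """Yield token orders reachable by inter-block then intra-block permutation."""
    for block_order in permutations(blocks):  # stage 1: reorder whole blocks
        def expand(prefix, remaining):
            # stage 2: permute steps inside each block independently
            if not remaining:
                yield prefix
                return
            head, *tail = remaining
            for perm in permutations(head):
                yield from expand(prefix + list(perm), tail)
        yield from expand([], list(block_order))


# Two blocks of two steps each (toy example):
blocks = [["a1", "a2"], ["b1", "b2"]]
orders = list(hierarchical_orders(blocks))
# 2! * (2!)**2 = 8 hierarchical candidates, versus 4! = 24 full permutations
```

Even at this tiny scale the hierarchical space (8) is a third of the full factorial space (24); for the billions of candidates mentioned above, the gap is what makes the search tractable.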
Yuta Sato
Chiba University
Kazuhiko Kawamoto
Chiba University
Hiroshi Kera
Chiba University
Approximate Computer Algebra · Adversarial Machine Learning · Math Transformer