Amplify Adjacent Token Differences: Enhancing Long Chain-of-Thought Reasoning with Shift-FFN

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
In long chain-of-thought (Long-CoT) reasoning, large language models frequently suffer from cyclic reasoning: they repetitively generate semantically similar steps until the sequence is truncated, a failure mode strongly correlated with insufficient representational divergence between adjacent tokens. Method: We propose Shift-FFN, a novel feed-forward network architecture that dynamically edits the current token's representation with the preceding token's before FFN processing, explicitly amplifying inter-token representational divergence to mitigate cycling. Integrated with LoRA-based fine-tuning on long-reasoning data, Shift-FFN enhances reasoning continuity without increasing inference latency. Contribution/Results: On multiple mathematical reasoning benchmarks, Shift-FFN significantly reduces the rate of cyclic reasoning and achieves higher accuracy than both full-parameter fine-tuning and standard LoRA under identical data budgets. This work establishes the critical role of local representational differentiation in sustaining coherent, stepwise reasoning and introduces a parameter-efficient paradigm for long-chain inference.
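The summary's correlation claim (low adjacent-token divergence predicts cycling) suggests a simple diagnostic. Below is a minimal sketch, assuming cosine similarity over a sequence's hidden states as the divergence measure; the paper's exact metric is not given here.

```python
import torch
import torch.nn.functional as F

def adjacent_token_similarity(hidden: torch.Tensor) -> torch.Tensor:
    """Mean cosine similarity between each hidden state and its predecessor.

    hidden: (seq_len, d_model) states for one generated sequence.
    Values near 1.0 indicate little adjacent-token divergence, the
    condition the summary associates with cyclic reasoning.
    """
    sim = F.cosine_similarity(hidden[1:], hidden[:-1], dim=-1)
    return sim.mean()
```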

📝 Abstract
Recently, models such as OpenAI-o1 and DeepSeek-R1 have demonstrated remarkable performance on complex reasoning tasks through Long Chain-of-Thought (Long-CoT) reasoning. Although distilling this capability into student models significantly enhances their performance, this paper finds that fine-tuning LLMs with full parameters or low-rank LoRA on long CoT data often leads to Cyclical Reasoning, where models repeatedly reiterate previous inference steps until the maximum length limit is reached. Further analysis reveals that smaller differences in representations between adjacent tokens correlate with a higher tendency toward Cyclical Reasoning. To mitigate this issue, this paper proposes Shift Feedforward Networks (Shift-FFN), a novel approach that edits the current token's representation with the previous one before inputting it to the FFN. This architecture dynamically amplifies the representation differences between adjacent tokens. Extensive experiments on multiple mathematical reasoning tasks demonstrate that LoRA combined with Shift-FFN achieves higher accuracy and a lower rate of Cyclical Reasoning across various data sizes compared to full fine-tuning and standard LoRA. Our data and code are available at https://anonymous.4open.science/r/Shift-FFN
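The abstract specifies only that the current token's representation is edited with the previous one before entering the FFN. Here is a minimal PyTorch sketch of that idea, assuming a learned sigmoid gate that amplifies the difference between adjacent representations; the paper's exact editing function may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShiftFFN(nn.Module):
    """Sketch: edit each token's hidden state with its left neighbor's
    before the position-wise FFN (the gated-difference form is an assumption)."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.down = nn.Linear(d_ff, d_model)
        # Hypothetical gate controlling how strongly the adjacent-token
        # difference is amplified; not specified in the abstract.
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, d_model). Shift right by one position so
        # each token is paired with its predecessor; the first token,
        # having no predecessor, is paired with zeros.
        h_prev = F.pad(h, (0, 0, 1, 0))[:, :-1, :]
        g = torch.sigmoid(self.gate(torch.cat([h, h_prev], dim=-1)))
        # Amplify the difference h - h_prev, then apply a standard FFN.
        h_edit = h + g * (h - h_prev)
        return self.down(F.gelu(self.up(h_edit)))
```

Because each position reads only its own predecessor, the edit stays causal and adds one cheap shift per block, consistent with the summary's claim that cycling is reduced without extra inference latency.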
Problem

Research questions and friction points this paper is trying to address.

Mitigating Cyclical Reasoning in fine-tuned LLMs
Enhancing token representation differences in Long-CoT
Improving accuracy in mathematical reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shift-FFN amplifies adjacent token differences
LoRA combined with Shift-FFN reduces Cyclical Reasoning (see the sketch after this list)
Dynamic representation editing enhances reasoning accuracy
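As noted in the second bullet above, the method pairs Shift-FFN with LoRA. The sketch below shows one plausible wiring, wrapping the FFN projections of the ShiftFFN module sketched earlier with low-rank adapters; which weights actually receive adapters is an assumption, not the paper's stated configuration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (standard LoRA)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pretrained weights
        # B starts at zero so the adapter is a no-op before training.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction x A^T B^T.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Hypothetical usage with the ShiftFFN sketch above: only the adapters
# (and, by assumption, the shift gate) remain trainable.
# ffn = ShiftFFN(d_model=1024, d_ff=4096)
# ffn.up, ffn.down = LoRALinear(ffn.up), LoRALinear(ffn.down)
```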
👥 Authors
Yao Xu
The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Beijing Academy of Artificial Intelligence, Beijing, China
Mingyu Xu
Bytedance
large language model, machine learning
Fangyu Lei
Institute of Automation, Chinese Academy of Sciences
LLM-Agent, Code Generation, Text-to-SQL, Table Reasoning
Wangtao Sun
The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Xiangrong Zeng
Baichuan Inc, Beijing, China
Bingning Wang
Baichuan Inc.
NLP, Question Answering, Large language model
Guang Liu
BAAI
AI, LLM, Data
Shizhu He
The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Jun Zhao
The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Kang Liu
The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China