🤖 AI Summary
In long chain-of-thought (Long-CoT) reasoning, large language models frequently suffer from cyclic reasoning—repetitively generating semantically similar steps until sequence truncation—strongly correlated with insufficient representational divergence between adjacent tokens.
Method: We propose Shift-FFN, a novel feed-forward network architecture that dynamically edits the current token’s representation using the preceding token’s embedding prior to FFN processing, thereby explicitly modeling inter-token representational divergence as a core mechanism to mitigate cycling. Integrated with LoRA-based fine-tuning and long-reasoning-aware training, Shift-FFN enhances reasoning continuity without increasing inference latency.
Contribution/Results: On multiple mathematical reasoning benchmarks, Shift-FFN significantly reduces cyclic reasoning rates and achieves higher accuracy than both full-parameter fine-tuning and standard LoRA under identical data budgets. This work establishes the critical role of local representational differentiation in sustaining coherent, stepwise reasoning and introduces a parameter-efficient, robust paradigm for long-chain inference.
📝 Abstract
Recently, models such as OpenAI-o1 and DeepSeek-R1 have demonstrated remarkable performance on complex reasoning tasks through Long Chain-of-Thought (Long-CoT) reasoning. Although distilling this capability into student models significantly enhances their performance, this paper finds that fine-tuning LLMs with full parameters or LoRA with a low rank on long CoT data often leads to Cyclical Reasoning, where models repeatedly reiterate previous inference steps until the maximum length limit. Further analysis reveals that smaller differences in representations between adjacent tokens correlates with a higher tendency toward Cyclical Reasoning. To mitigate this issue, this paper proposes Shift Feedforward Networks (Shift-FFN), a novel approach that edits the current token's representation with the previous one before inputting it to FFN. This architecture dynamically amplifies the representation differences between adjacent tokens. Extensive experiments on multiple mathematical reasoning tasks demonstrate that LoRA combined with Shift-FFN achieves higher accuracy and a lower rate of Cyclical Reasoning across various data sizes compared to full fine-tuning and standard LoRA. Our data and code are available at https://anonymous.4open.science/r/Shift-FFN