Enhancing Chain-of-Thought Reasoning with Critical Representation Fine-tuning

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Representation fine-tuning (ReFT) underperforms on complex chain-of-thought (CoT) reasoning tasks due to two key limitations: (i) representations edited at fixed positions exert uncertain influence on final outputs, and (ii) with the backbone frozen, optimization effort spent on such non-critical representations is inefficient. Method: We propose Critical Representation Fine-Tuning (CRFT), the first method to identify and localize *critical representations* (those determinative for final outputs) within reasoning chains. CRFT dynamically locates these representations via information-flow analysis and performs supervised, low-rank linear-subspace optimization over them while keeping the backbone frozen. Results: Evaluated on the LLaMA and Mistral model families across eight arithmetic and commonsense reasoning benchmarks, CRFT enables efficient, lightweight adaptation and improves one-shot accuracy by an average of 16.4%. It significantly outperforms conventional PEFT methods, demonstrating both strong generalization and computational efficiency.
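The summary does not spell out CRFT's parameterization; as a point of reference, the LoReFT intervention from the original ReFT work edits a hidden state as h + Rᵀ(Wh + b − Rh), replacing h's component inside a learned rank-r subspace while the backbone stays frozen. The sketch below is a minimal numpy illustration of that style of edit; all names, shapes, and toy dimensions are assumptions, not CRFT's exact formulation:

```python
import numpy as np

def low_rank_intervention(h, R, W, b):
    """Low-rank linear-subspace edit of a hidden state h (illustrative).

    h : (d,)    hidden state at a selected (critical) position
    R : (r, d)  projection with orthonormal rows, spanning the subspace
    W : (r, d), b : (r,)  learned linear map into that subspace

    Returns h + R^T (W h + b - R h): the component of h inside the
    r-dimensional subspace is replaced by the learned projection;
    the backbone weights that produced h are never touched.
    """
    return h + R.T @ (W @ h + b - R @ h)

rng = np.random.default_rng(0)
d, r = 16, 4                                  # hidden size, subspace rank (toy values)
h = rng.normal(size=d)
R = np.linalg.qr(rng.normal(size=(d, r)))[0].T  # orthonormal rows via QR
W = rng.normal(size=(r, d))
b = rng.normal(size=r)

h_edit = low_rank_intervention(h, R, W, b)
# The edit only moves h within the subspace spanned by R's rows:
# the component of h orthogonal to that subspace is unchanged.
assert np.allclose(h_edit - R.T @ (R @ h_edit), h - R.T @ (R @ h))
```

Because only R, W, and b are trained, the number of trainable parameters scales with r·d per intervened layer rather than with the full model size.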

📝 Abstract
Representation Fine-tuning (ReFT), a recently proposed Parameter-Efficient Fine-Tuning (PEFT) method, has attracted widespread attention for significantly improving parameter efficiency by editing representation space alone. In this work, we investigate applying ReFT to complex reasoning tasks. However, directly using the native ReFT method, which modifies fixed representations at the beginning and end of each layer, yields suboptimal performance, as these fixed-position representations have uncertain impact on the outputs. We observe that, in complex reasoning tasks, there often exist certain critical representations. These representations either integrate significant information from preceding layers or regulate subsequent layer representations. Through layer-by-layer propagation, they exert a substantial influence on the final output. Naturally, fine-tuning these critical representations has the potential to greatly enhance reasoning performance. Building upon these insights, we propose Critical Representation Fine-Tuning (CRFT), a novel method that identifies and optimizes these critical representations through information flow analysis. CRFT operates within a supervised learning framework, dynamically optimizing critical representations in a low-rank linear subspace while freezing the base model. The effectiveness and efficiency of our method are validated across eight benchmarks for arithmetic and commonsense reasoning, using LLaMA and Mistral model families. Furthermore, our method also adapts effectively to few-shot settings, boosting one-shot accuracy by 16.4%. Our work highlights the untapped potential of representation-level optimization for CoT reasoning, offering a lightweight yet powerful alternative to traditional PEFT methods.
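The abstract locates critical representations via information-flow analysis but does not specify the measure. The sketch below uses aggregate received attention as a crude stand-in: positions whose representations feed many later computations receive more attention, so they score as candidates for intervention. The function name, the `top_k` parameter, and the scoring rule are all illustrative assumptions, not the paper's method:

```python
import numpy as np

def critical_positions(attn, top_k=2):
    """Rank token positions by a simple attention-based information-flow proxy.

    attn : (L, T, T) attention maps, where attn[l, i, j] is the attention
           that query position i pays to position j in layer l.

    Each position is scored by the total attention it receives, summed over
    layers and query positions; the top-k scorers are returned as candidate
    "critical" positions.
    """
    scores = attn.sum(axis=(0, 1))           # (T,) total attention received
    return np.argsort(scores)[::-1][:top_k], scores

rng = np.random.default_rng(1)
L, T = 4, 6                                   # layers, sequence length (toy values)
logits = rng.normal(size=(L, T, T))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)  # row-softmax

top, scores = critical_positions(attn, top_k=2)
print(top)  # indices of the two highest-scoring positions
```

In practice, information-flow measures often also weight attention by gradients or value norms; the uniform sum here is only the simplest instance of the idea.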
Problem

Research questions and friction points this paper is trying to address.

Fixed-position ReFT edits have uncertain influence on reasoning outputs
How to locate the representations that actually determine final answers
How to adapt a frozen backbone efficiently in few-shot reasoning settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies critical representations via information flow
Optimizes critical representations in low-rank subspace
Enhances reasoning with lightweight representation fine-tuning
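As a back-of-the-envelope illustration of why representation-level fine-tuning is lightweight, compare trainable parameter counts for full fine-tuning against a rank-r subspace edit at LLaMA-7B-like dimensions. Every number below (hidden size, layer count, rank, the per-layer parameter breakdown) is an illustrative assumption, not a figure from the paper:

```python
d = 4096            # hidden size (LLaMA-7B scale, illustrative)
n_layers = 32
r = 4               # intervention subspace rank (assumed)

# A rank-r linear subspace edit per layer needs roughly:
#   R: r*d (projection) + W: r*d (learned map) + b: r
per_layer = r * d + r * d + r
reft_params = n_layers * per_layer

full_ft_params = 7_000_000_000   # order of magnitude for full fine-tuning

print(f"intervention params: {reft_params:,}")
print(f"fraction of full model: {reft_params / full_ft_params:.2e}")
```

On these toy numbers the intervention trains roughly a hundredth of a percent of the model's parameters, which is the regime in which ReFT-style methods are typically reported to operate.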