Training Small Reasoning LLMs with Cognitive Preference Alignment

📅 2025-04-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Small language models (SLMs) have weak reasoning capabilities, while direct distillation of chain-of-thought (CoT) traces from large language models (LLMs) often fails because those traces are misaligned with SLMs' inherent cognitive constraints, and manual CoT annotation remains prohibitively expensive. Method: We propose the Critique-Rethink-Verify (CRV) multi-agent framework coupled with Cognitive Preference Optimization (CogPO), the first approach to explicitly align reasoning processes with the intrinsic cognitive capacities of SLMs. It combines multi-LLM collaborative critique, cognition-aware CoT reconstruction, preference-based optimization, and verification-driven iterative refinement for efficient knowledge transfer. Results: On challenging reasoning benchmarks (GSM8K, MMLU, and BBH), our 1.3B-parameter model significantly outperforms prior SLM methods and approaches the performance of 7B-parameter LLMs, a more than 5× improvement in parameter efficiency. This establishes a novel paradigm for lightweight, high-fidelity reasoning.

📝 Abstract
The reasoning capabilities of large language models (LLMs), such as OpenAI's o1 and DeepSeek-R1, have seen substantial advancements through deep thinking. However, these enhancements come with significant resource demands, underscoring the need to explore strategies for training effective reasoning LLMs with far fewer parameters. A critical challenge is that smaller models have different capacities and cognitive trajectories than their larger counterparts. Hence, direct distillation of chain-of-thought (CoT) results from large LLMs to smaller ones can sometimes be ineffective and requires a huge amount of annotated data. In this paper, we introduce a novel framework called Critique-Rethink-Verify (CRV), designed for training smaller yet powerful reasoning LLMs. Our CRV framework consists of multiple LLM agents, each specializing in a unique ability: (i) critiquing CoTs according to the cognitive capabilities of smaller models, (ii) rethinking and refining these CoTs based on the critiques, and (iii) verifying the correctness of the refined results. We further propose the cognitive preference optimization (CogPO) algorithm to enhance the reasoning abilities of smaller models by aligning the thoughts of these models with their cognitive capacities. Comprehensive evaluations on challenging reasoning benchmarks demonstrate the efficacy of CRV and CogPO, which outperform other training methods by a large margin.
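
The three-stage agent loop described in the abstract can be pictured as follows. This is a minimal sketch under stated assumptions, not the authors' implementation: the prompts, the `llm_call` helper, and the retry cap `max_rounds` are hypothetical placeholders introduced for illustration.

```python
# Minimal sketch of the Critique-Rethink-Verify (CRV) loop from the abstract.
# `llm_call` is a hypothetical helper standing in for an API call to the LLM
# backing each agent; the prompts and retry cap are illustrative assumptions.

def llm_call(instruction: str, content: str) -> str:
    """Placeholder for a request to the agent's backing LLM."""
    raise NotImplementedError

def crv_refine(question: str, teacher_cot: str, answer: str,
               max_rounds: int = 3) -> str | None:
    cot = teacher_cot
    for _ in range(max_rounds):
        # (i) Critique: judge whether the CoT fits a small model's capacity.
        critique = llm_call(
            "Critique this chain-of-thought for a small student model: flag "
            "steps that are too abstract, too compressed, or skip reasoning.",
            f"Question: {question}\nCoT: {cot}",
        )
        # (ii) Rethink: rewrite the CoT to address every critique point.
        cot = llm_call(
            "Rewrite the chain-of-thought so a small model can follow it, "
            "addressing every point in the critique.",
            f"Question: {question}\nCoT: {cot}\nCritique: {critique}",
        )
        # (iii) Verify: keep the refined CoT only if it still derives the answer.
        verdict = llm_call(
            "Does this chain-of-thought correctly derive the given answer? "
            "Reply YES or NO.",
            f"Question: {question}\nCoT: {cot}\nAnswer: {answer}",
        )
        if verdict.strip().upper().startswith("YES"):
            return cot  # accepted as training data for the small model
    return None  # discard examples that never pass verification
```

Accepted CoTs provide the supervision (and, paired against rejected ones, the preference data) used to train the small model.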
Problem

Research questions and friction points this paper is trying to address.

Training small reasoning LLMs efficiently with fewer parameters
Aligning cognitive preferences of small models with their capacities
Improving reasoning without direct distillation from large LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

CRV framework trains small reasoning LLMs
CogPO aligns model thoughts with cognitive capacities (a preference-loss sketch follows this list)
Multi-agent system critiques and verifies CoTs
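
This summary does not spell out CogPO's exact objective, so the sketch below shows only the generic shape of the preference-based optimization it builds on: a standard DPO-style loss over (preferred, rejected) response pairs. Treating cognition-aligned CoTs as "chosen" against the original teacher CoTs as "rejected", as well as the value of `beta`, are assumptions made for illustration.

```python
# Illustrative DPO-style preference loss; CogPO's actual objective is not
# given in this summary. Pairing cognition-aligned CoTs as "chosen" against
# original teacher CoTs as "rejected" is an assumption for illustration.
import torch
import torch.nn.functional as F

def preference_loss(
    policy_chosen_logps: torch.Tensor,    # log p_theta(chosen | prompt)
    policy_rejected_logps: torch.Tensor,  # log p_theta(rejected | prompt)
    ref_chosen_logps: torch.Tensor,       # same, under a frozen reference model
    ref_rejected_logps: torch.Tensor,
    beta: float = 0.1,                    # KL-strength hyperparameter (assumed)
) -> torch.Tensor:
    # How much more the policy (relative to the frozen reference) prefers
    # the cognition-aligned CoT over the original one.
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    # Standard Bradley-Terry negative log-sigmoid loss on the margin gap.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```

A cognition-aware variant could, for example, reweight each pair by how far a CoT exceeds the student's capacity, but that detail belongs to the paper, not this sketch.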
Wenrui Cai
State Key Laboratory of Virtual Reality Technology and System, Beihang University
Computer Vision · Video Analysis · LLMs
Chengyu Wang
Alibaba Group
Natural Language Processing · Large Language Model · Multi-modal Learning
Junbing Yan
Alibaba Cloud Computing, Hangzhou, China
Jun Huang
Alibaba Cloud Computing, Hangzhou, China
Xiangzhong Fang
Shanghai Jiao Tong University, Shanghai, China