🤖 AI Summary
This work addresses the efficient extraction and cross-model transfer of reasoning capabilities in large language models (LLMs). Building on task arithmetic, it decouples complex reasoning skills—acquired via reinforcement learning—into reusable, editable parameter vectors (“reasoning vectors”), enabling capability injection and removal through simple vector arithmetic. Using Qwen2.5, these vectors are constructed by computing the parameter difference between two identically initialized models, one fine-tuned with supervised fine-tuning (SFT) and the other with group relative policy optimization (GRPO), and are then transferred to compatible models by tensor addition. The approach achieves absolute improvements of +4.9% on GSM8K and +12.3% on BigBenchHard (1.5B model), while demonstrating strong robustness to interference. These results validate the feasibility and generalizability of representing reasoning ability as a portable parameter-space vector—a modular, compositional paradigm for LLM capability engineering.
📝 Abstract
Large language models often require costly optimization, such as reinforcement learning, to master complex reasoning tasks. This work demonstrates that reasoning ability, once learned, can be extracted and transferred between models as a compact task vector. We source two publicly available, identically initialized Qwen2.5 models, one fine-tuned with supervised fine-tuning (SFT) and the other with group relative policy optimization (GRPO) on the same dataset. From these, we extract a reasoning vector: $v_{\text{reason}} = \theta_{\text{GRPO}} - \theta_{\text{SFT}}$. We hypothesize that this vector captures the reasoning capability instilled by reinforcement learning while factoring out shared knowledge from the SFT process. When added to compatible instruction-tuned models through simple arithmetic, this vector consistently improves performance across diverse reasoning benchmarks: GSM8K (+4.9%), HumanEval (+4.3%), SciQ (+1.7%), and BigBenchHard (+12.3% for the 1.5B model). The performance improvements persist under adversarial conditions. Conversely, subtracting the vector causes significant performance degradation (-11.8% on GSM8K), demonstrating the vector's strong contribution to the model's reasoning abilities. This work shows how reasoning capabilities, typically developed through expensive training, can be extracted from existing open-source models and reused through simple tensor arithmetic, offering a practical way to enhance models by recycling prior computational investments.
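The extraction-and-transfer procedure described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: parameters are modeled as plain `{name: list-of-floats}` dicts standing in for real model state dicts, and the scaling factor `alpha` is a hypothetical knob (the abstract only describes plain addition and subtraction, i.e. `alpha = 1` and `alpha = -1`).

```python
# Minimal sketch of reasoning-vector extraction and transfer via task
# arithmetic. Assumes all models share the same architecture and parameter
# names; dicts of float lists stand in for real model state dicts.

def extract_reasoning_vector(theta_grpo, theta_sft):
    """v_reason = theta_GRPO - theta_SFT, element-wise per parameter tensor."""
    return {
        name: [g - s for g, s in zip(theta_grpo[name], theta_sft[name])]
        for name in theta_sft
    }

def apply_vector(theta_target, v_reason, alpha=1.0):
    """theta' = theta_target + alpha * v_reason (alpha = -1 subtracts)."""
    return {
        name: [t + alpha * v for t, v in zip(theta_target[name], v_reason[name])]
        for name in theta_target
    }

# Toy example with a single three-parameter "layer":
theta_sft = {"layer.weight": [0.1, 0.2, 0.3]}
theta_grpo = {"layer.weight": [0.3, 0.1, 0.6]}
v = extract_reasoning_vector(theta_grpo, theta_sft)
enhanced = apply_vector({"layer.weight": [1.0, 1.0, 1.0]}, v)
degraded = apply_vector({"layer.weight": [1.0, 1.0, 1.0]}, v, alpha=-1.0)
```

In practice the same arithmetic would run over PyTorch state dicts, tensor by tensor, before loading the result back into a compatible instruction-tuned model.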