🤖 AI Summary
This work addresses the challenge of enabling multiple frozen, small-scale specialized language models to collaborate effectively on complex reasoning tasks without relying on a large monolithic language model. The authors propose a trainable soft attention interface that fuses heterogeneous expert models at the hidden-state level and employs reinforcement learning with verifiable rewards (RLVR) to guide their cooperative reasoning. The approach reveals a dual-mode expert-utilization strategy that evolves with task difficulty: static expert preferences dominate on simple tasks, while complex tasks elicit dynamic, structured attention patterns and emergent specialization among experts. Evaluated on Reasoning Gym and GSM8K, the method matches the performance of strong single-model RLVR baselines while offering direct observability into how collaborative reasoning dynamics evolve over training.
📝 Abstract
Recent progress in reinforcement learning with verifiable rewards (RLVR) shows that small, specialized language models (SLMs) can exhibit structured reasoning without relying on large monolithic LLMs. We introduce soft hidden-state collaboration, where multiple heterogeneous frozen SLM experts are integrated through their internal representations via a trainable attention interface. Experiments on Reasoning Gym and GSM8K show that this latent integration is competitive with strong single-model RLVR baselines. Ablations further reveal a dual mechanism of expert utilization: for simpler arithmetic domains, performance gains can largely be explained by static expert preferences, whereas more challenging settings induce increasingly concentrated and structured expert attention over training, indicating emergent specialization in how the router connects to relevant experts. Overall, hidden-state collaboration provides a compact mechanism for leveraging frozen experts, while offering an observational window into expert utilization patterns and their evolution under RLVR.
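The core mechanism described above, a trainable attention interface that mixes the hidden states of several frozen experts, can be sketched in a few lines. The sketch below is illustrative only: the function names, shapes, and the single-query scaled dot-product form are assumptions, not the paper's actual architecture, and it presumes the experts' hidden states have already been projected to a shared dimension.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_expert_states(expert_states, query, key_proj):
    """Hypothetical soft-attention fusion of frozen experts' hidden states.

    expert_states: (n_experts, d) hidden vectors from the frozen SLMs,
        assumed pre-projected to a shared dimension d.
    query: (d,) trainable router query.
    key_proj: (d, d) trainable key projection.
    Returns the fused hidden state and the attention weights, which serve
    as an observable expert-utilization pattern.
    """
    keys = expert_states @ key_proj               # (n_experts, d)
    scores = keys @ query / np.sqrt(len(query))   # scaled dot-product scores
    weights = softmax(scores)                     # soft expert weighting
    fused = weights @ expert_states               # (d,) convex combination
    return fused, weights

# Toy usage: 3 experts, hidden size 4.
rng = np.random.default_rng(0)
states = rng.normal(size=(3, 4))
q = rng.normal(size=4)
W_k = rng.normal(size=(4, 4))
fused, w = fuse_expert_states(states, q, W_k)
```

Because the experts stay frozen, only the query and key projection would be trained (e.g., by RLVR), and the attention weights `w` are exactly the kind of quantity whose concentration over training the abstract's ablations examine.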