It Helps to Take a Second Opinion: Teaching Smaller LLMs to Deliberate Mutually via Selective Rationale Optimisation

📅 2025-03-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of training small language models (SLMs, <13B parameters) in commercial settings—where large language models (LLMs) cannot be used for distillation due to API costs, copyright, and compliance constraints—this paper proposes COALITION: a collaborative deliberation framework between two variants of the same SLM that operates entirely without LLM involvement. Its core comprises two behaviorally distinct, parameter-shared SLM variants that jointly produce diverse candidate rationales during generation and refinement steps. The model is trained via Selective Rationale Optimization (SRO) to prefer rationales that maximize the likelihood of the ground-truth answer, and at inference a controller selects which variant generates or refines each rationale. This enables trainable self-assessment and reasoning enhancement within a purely SLM-based paradigm. Experiments show improvements of up to 5% over baselines on five complex reasoning tasks, with applicability across 4B–14B models from major families including Llama, Mistral, Qwen, and Phi. The implementation is publicly available.

📝 Abstract
Very large language models (LLMs) such as GPT-4 have shown the ability to handle complex tasks by generating and self-refining step-by-step rationales. Smaller language models (SLMs), typically with <13B parameters, have been improved by using the data generated from very-large LMs through knowledge distillation. However, various practical constraints such as API costs, copyright, legal and ethical policies restrict using large (often opaque) models to train smaller models for commercial use. Limited success has been achieved at improving the ability of an SLM to explore the space of possible rationales and evaluate them by itself through self-deliberation. To address this, we propose COALITION, a trainable framework that facilitates interaction between two variants of the same SLM and trains them to generate and refine rationales optimized for the end-task. The variants exhibit different behaviors to produce a set of diverse candidate rationales during the generation and refinement steps. The model is then trained via Selective Rationale Optimization (SRO) to prefer generating rationale candidates that maximize the likelihood of producing the ground-truth answer. During inference, COALITION employs a controller to select the suitable variant for generating and refining the rationales. On five different datasets covering mathematical problems, commonsense reasoning, and natural language inference, COALITION outperforms several baselines by up to 5%. Our ablation studies reveal that cross-communication between the two variants performs better than using the single model to self-refine the rationales. We also demonstrate the applicability of COALITION for LMs of varying scales (4B to 14B parameters) and model families (Mistral, Llama, Qwen, Phi). We release the code for this work at https://github.com/Sohanpatnaik106/coalition.
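The selection step behind SRO can be illustrated schematically: among the candidate rationales produced by the two variants, prefer the one under which the ground-truth answer is most likely, and treat the worst-scoring candidate as the rejected example for preference training. The sketch below is a minimal toy illustration, not the paper's implementation; the scoring dictionary stands in for log P(ground-truth answer | rationale), which in practice would come from the SLM itself.

```python
def select_preference_pair(candidates, answer_loglik):
    """Toy sketch of SRO's selection step: rank candidate rationales by
    the (log-)likelihood they assign to the ground-truth answer and
    return a (chosen, rejected) pair for preference optimization."""
    ranked = sorted(candidates, key=answer_loglik, reverse=True)
    return ranked[0], ranked[-1]

# Hypothetical scores standing in for log P(answer | rationale):
scores = {"rationale_good": -0.5, "rationale_ok": -1.2, "rationale_bad": -3.0}
chosen, rejected = select_preference_pair(list(scores), scores.get)
print(chosen, rejected)  # rationale_good rationale_bad
```

The (chosen, rejected) pair would then feed a standard preference-optimization loss, steering the model toward rationales that lead to the correct answer.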
Problem

Research questions and friction points this paper is trying to address.

Improving smaller language models' self-deliberation and rationale generation.
Reducing reliance on large models for training smaller commercial models.
Enhancing rationale diversity and optimization for better task performance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

COALITION framework trains SLMs to deliberate mutually.
Selective Rationale Optimization maximizes ground-truth likelihood.
Controller selects the suitable variant for rationale generation and refinement at inference.
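The inference-time flow implied by these points can be sketched as a simple loop: at each step a controller picks which SLM variant acts, so one variant's rationale can be refined by the other (cross-communication rather than self-refinement). All names here (`deliberate`, `variants`, `controller`) are hypothetical stand-ins, not the paper's API.

```python
def deliberate(x, variants, controller, steps=2):
    """Hypothetical COALITION-style inference loop: the controller
    chooses a variant per step; the chosen variant generates a
    rationale (step 0) or refines the previous one (later steps)."""
    rationale = None
    for step in range(steps):
        name = controller(x, rationale, step)   # pick which variant acts
        rationale = variants[name](x, rationale)
    return rationale

# Toy variants: one drafts an initial rationale, the other refines it.
variants = {
    "draft":  lambda x, r: f"draft({x})",
    "refine": lambda x, r: f"refine({r})",
}
controller = lambda x, r, step: "draft" if r is None else "refine"
print(deliberate("question", variants, controller))  # refine(draft(question))
```

In the paper the controller is learned, and both variants share parameters; the dictionary of lambdas above only mimics their differing behaviors.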