🤖 AI Summary
To address the performance limitations of large language models (LLMs) on domain-specific tasks caused by insufficient proprietary knowledge, this paper proposes a weak–strong collaborative reasoning framework: a lightweight, domain-specialized "weak" model generates initial drafts and domain background knowledge, while a powerful general-purpose LLM performs high-level reasoning and refinement. The authors introduce a collaborative feedback mechanism that quantifies the weak model's contributions and constructs preference pairs, enabling preference alignment via DPO or RLHF and thereby departing from conventional single-model fine-tuning paradigms. Experiments across three professional domains show that the collaborative framework significantly outperforms single-model baselines, and that preference alignment yields further consistent gains, validating the complementarity, scalability, and efficacy of the proposed approach.
📝 Abstract
Current Large Language Models (LLMs) excel at general reasoning yet struggle with specialized tasks that require proprietary or domain-specific knowledge. Fine-tuning large models for every niche application is often infeasible due to black-box constraints and high computational overhead. To address this, we propose a collaborative framework that pairs a specialized weak model with a general strong model. The weak model, tailored to specific domains, produces initial drafts and background information, while the strong model leverages its advanced reasoning to refine these drafts, extending LLMs' capabilities to critical yet specialized tasks. To optimize this collaboration, we introduce a collaborative feedback mechanism that fine-tunes the weak model: it quantifies the influence of the weak model's contributions during the collaboration procedure and constructs preference pairs to guide preference tuning of the weak model. We validate our framework through experiments in three domains. We find that the collaboration significantly outperforms either model alone by leveraging their complementary strengths. Moreover, aligning the weak model with the collaborative preference further enhances overall performance.
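The collaboration-and-feedback loop described above can be sketched in code. This is a minimal illustrative mock, not the paper's implementation: the stand-in "models" are string templates, the contribution score is a toy word-overlap improvement measure (the paper's actual metric is not specified here), and all function names are assumptions.

```python
def weak_model_draft(query: str) -> str:
    """Stand-in for the domain-specialized weak model: emits a draft with
    background knowledge. A real system would call a fine-tuned LM."""
    return f"[domain draft for: {query}]"

def strong_model_refine(query: str, draft: str) -> str:
    """Stand-in for the general strong model refining the weak draft."""
    return f"[refined answer to '{query}' using {draft}]"

def word_overlap(answer: str, reference: str) -> float:
    """Fraction of reference words that appear in the answer (toy metric)."""
    a, r = set(answer.lower().split()), set(reference.lower().split())
    return len(a & r) / max(len(r), 1)

def contribution_score(query: str, draft: str, reference: str) -> float:
    """Quantify the weak model's contribution as the improvement the draft
    brings to the strong model's output, relative to refining without it."""
    with_draft = word_overlap(strong_model_refine(query, draft), reference)
    without_draft = word_overlap(strong_model_refine(query, ""), reference)
    return with_draft - without_draft

def build_preference_pair(query: str, draft_a: str, draft_b: str,
                          reference: str) -> dict:
    """Rank two candidate weak-model drafts by measured contribution and
    emit a (chosen, rejected) pair for DPO-style preference tuning."""
    if contribution_score(query, draft_a, reference) >= \
       contribution_score(query, draft_b, reference):
        return {"prompt": query, "chosen": draft_a, "rejected": draft_b}
    return {"prompt": query, "chosen": draft_b, "rejected": draft_a}

pair = build_preference_pair(
    query="what is X",
    draft_a="X is a widget",        # on-topic draft
    draft_b="unrelated tokens",     # off-topic draft
    reference="X is a widget",
)
print(pair["chosen"])  # the on-topic draft wins the comparison
```

Preference pairs in this `{"prompt", "chosen", "rejected"}` form are the standard input format for DPO trainers; the weak model alone is then updated on them, leaving the strong model untouched, which is what lets the framework work with black-box strong models.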