🤖 AI Summary
This work addresses the inefficiency of knowledge transfer in federated learning when large and small language models collaborate, which stems from the bidirectional learnability gap between the models and from domain-agnostic reasoning transfer. To tackle this, the authors propose LaDa, a novel framework that introduces, for the first time, a learnability-aware mechanism to dynamically filter high-reward samples, and employs contrastive distillation to align the joint reasoning-path probabilities of the large and small models. This enables adaptive reasoning transfer tailored to local data distributions. LaDa also features a plug-in federated distillation architecture that supports efficient, lightweight deployment. Experimental results demonstrate that LaDa significantly enhances the step-by-step reasoning capability of small models on local tasks and effectively bridges the bidirectional knowledge gap.
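The learnability-aware filtering idea can be illustrated with a toy sketch. Everything below is an assumption for illustration, not the paper's actual method: we score each sample by the gap between the LLM's and the SLM's joint log-probability for it, then keep only samples whose gap falls in a "learnable band" (large enough to carry new knowledge, small enough for the SLM to absorb). The function names and thresholds are hypothetical.

```python
def sequence_logprob(token_logprobs):
    """Joint log-probability of a reasoning path = sum of per-token log-probs."""
    return sum(token_logprobs)

def learnability_gap(slm_logprob, llm_logprob):
    """Hypothetical per-sample gap: how much better the LLM models this
    sample than the SLM does (larger = more knowledge left to transfer)."""
    return llm_logprob - slm_logprob

def filter_high_reward(samples, low=0.5, high=5.0):
    """Keep samples whose gap lies in a learnable band: gaps below `low`
    offer little new knowledge; gaps above `high` exceed what the SLM can
    plausibly absorb. `low`/`high` are illustrative hyperparameters."""
    kept = []
    for s in samples:
        gap = learnability_gap(s["slm_logprob"], s["llm_logprob"])
        if low <= gap <= high:
            kept.append(s)
    return kept
```

In a real system the two log-probabilities would come from scoring the same reasoning path under both models; here they are just fields on a dict.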
📝 Abstract
Data allocation plays a critical role in federated reasoning collaboration between large language models (LLMs) and small language models (SLMs). Nevertheless, existing data allocation methods fail to address an under-explored challenge in this collaboration: the bidirectional model learnability gap, where client-side SLMs cannot identify high-reward samples that match their learnability constraints for effective knowledge transfer from the LLM, while the LLM struggles to select samples that contribute novel knowledge beyond its existing data. These collaboration frameworks face a further key challenge: domain-agnostic reasoning transfer, where existing reasoning transfer methods fail to flexibly adapt to local domain data, preventing SLMs from effectively acquiring step-by-step reasoning abilities from a general LLM. To address these challenges, we propose LaDa, a federated reasoning distillation framework with model learnability-aware data allocation. It introduces a model learnability-aware data filter that adaptively allocates high-reward samples based on the learnability gap between each SLM–LLM pair, effectively facilitating bidirectional knowledge transfer. We further design a domain-adaptive reasoning distillation method that aligns the joint probabilities of reasoning paths on the filtered high-reward samples through contrastive distillation between each SLM and the LLM, enabling the SLM to capture underlying reasoning patterns under its local data distribution. LaDa operates as a plug-in module for existing collaboration frameworks, adapting knowledge transfer to model learnability gaps.
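One way to picture aligning joint reasoning-path probabilities via contrastive learning is an InfoNCE-style objective over candidate paths: the SLM's distribution over candidates is pushed toward the path the LLM scores highest, with the remaining candidates acting as negatives. This is only a minimal sketch under that assumption; the loss form, temperature, and function names are hypothetical and not taken from the paper.

```python
import math

def path_logprob(token_logprobs):
    # The joint probability of a reasoning path factorises over its tokens,
    # so its log equals the sum of per-token log-probabilities.
    return sum(token_logprobs)

def contrastive_distill_loss(slm_path_logprobs, llm_path_logprobs, tau=1.0):
    """InfoNCE-style contrastive distillation over candidate reasoning paths.

    slm_path_logprobs: SLM joint log-probs for each candidate path (student)
    llm_path_logprobs: LLM joint log-probs for the same paths (teacher ranking)
    tau: temperature (illustrative hyperparameter)

    The path the LLM ranks highest is treated as the positive; the loss is
    the negative log-softmax probability the SLM assigns to it.
    """
    target = max(range(len(llm_path_logprobs)),
                 key=lambda i: llm_path_logprobs[i])
    logits = [lp / tau for lp in slm_path_logprobs]
    m = max(logits)  # subtract max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[target] - log_z)
```

The loss shrinks as the SLM concentrates probability mass on the teacher-preferred path, which is the intended alignment effect.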