A Structure-Agnostic Co-Tuning Framework for LLMs and SLMs in Cloud-Edge Systems

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
In cloud-edge collaborative systems, structural heterogeneity between large language models (LLMs) and small language models (SLMs) impedes joint training, and preserving privacy while adapting each model to its device's domain remains challenging. To address this, we propose a structure-agnostic joint fine-tuning framework. Its core innovation is a lightweight distilled proxy model that serves as a knowledge relay across devices and domains, enabling structure-independent knowledge distillation and mutual learning. This design eliminates raw-data transmission, thereby preserving privacy, while retaining each device's model-specific domain expertise. Experiments demonstrate that our method achieves state-of-the-art performance, improving Rouge-L and Exact Match (EM) scores by 5.38% and 4.88%, respectively, and significantly enhancing collaborative inference among heterogeneous models.
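A minimal sketch of the relay idea described above, assuming distillation operates on output probability distributions over a shared vocabulary (the paper's actual losses, proxy architecture, and mutual-learning schedule are not detailed here; all function names below are illustrative): the server LLM's output distribution trains the proxy, and the proxy's output in turn trains the on-device SLM, so only probability vectors, never raw data or architecture-specific weights, cross the cloud-edge boundary.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    # KL(p || q) between two probability vectors.
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

def distill_step(student_logits, target_probs, lr=0.5):
    # Gradient of KL(target || softmax(z)) w.r.t. logits z is softmax(z) - target,
    # so one gradient-descent step needs only the target's output distribution,
    # never its architecture: this is what makes the exchange structure-agnostic.
    return student_logits - lr * (softmax(student_logits) - target_probs)

rng = np.random.default_rng(0)
vocab = 8  # toy shared output space

# Server-side LLM output (stand-in for a real model's predictive distribution).
teacher_probs = softmax(rng.normal(size=(1, vocab)))

# Stage 1: the lightweight proxy distills from the server LLM.
proxy_logits = rng.normal(size=(1, vocab))
for _ in range(500):
    proxy_logits = distill_step(proxy_logits, teacher_probs)
proxy_probs = softmax(proxy_logits)

# Stage 2: the edge SLM distills from the proxy, not from the LLM directly.
slm_logits = rng.normal(size=(1, vocab))
for _ in range(500):
    slm_logits = distill_step(slm_logits, proxy_probs)

# The SLM's distribution ends up close to the teacher's via the relay.
print(kl(teacher_probs, softmax(slm_logits)))
```

In the same spirit, the mutual-learning direction would run a symmetric step with the SLM's distribution as the target, letting domain-specific knowledge flow back through the proxy to the server model.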

📝 Abstract
The surge in intelligent applications driven by large language models (LLMs) has made it increasingly difficult for bandwidth-limited cloud servers to process extensive LLM workloads in real time without compromising user data privacy. To solve these problems, recent research has focused on constructing cloud-edge consortia that integrate a server-based LLM with small language models (SLMs) on mobile edge devices. Designing collaborative training mechanisms within such consortia to enhance inference performance has emerged as a promising research direction. However, the cross-domain deployment of SLMs, coupled with structural heterogeneity across SLM architectures, poses significant challenges to improving model performance. To this end, we propose Co-PLMs, a novel co-tuning framework for collaborative training of large and small language models, which integrates structure-agnostic mutual learning to realize knowledge exchange between heterogeneous language models. The framework employs distilled proxy models (DPMs) as bridges to enable collaborative training between the heterogeneous server-based LLM and on-device SLMs, while preserving the domain-specific insights of each device. Experimental results show that Co-PLMs outperform state-of-the-art methods, achieving average increases of 5.38% in Rouge-L and 4.88% in EM.
Problem

Research questions and friction points this paper is trying to address.

Enabling real-time LLM processing while preserving user data privacy
Overcoming structural heterogeneity in cloud-edge SLM deployments
Facilitating knowledge exchange between heterogeneous large and small language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structure-agnostic co-tuning for heterogeneous language models
Distilled proxy models bridge cloud-edge collaborative training
Preserves domain-specific insights while enabling knowledge exchange