🤖 AI Summary
6G intelligent services simultaneously demand ultra-low latency, high reliability, and strong privacy, yet centralized large language model (LLM) inference cannot deliver edge responsiveness and data security at the same time. This paper proposes a cloud-edge collaborative inference architecture integrating large and small language models. We introduce two key innovations: (1) cross-tier KV cache reuse with an asynchronous state-loading and decoding scheduling mechanism; and (2) heterogeneous model layer alignment coupled with semantic representation compression, which jointly enhance comprehension capability while reducing communication overhead. The approach keeps end-device data within its local domain and significantly lowers edge computational and storage loads. Experimental results show a 37.2% reduction in inference latency, a 2.1× improvement in concurrent throughput, and 99.99% system stability. Our framework establishes a scalable paradigm for semantic communication–computation convergence in 6G networks.
📝 Abstract
Emerging intelligent service scenarios in 6G communications impose stringent requirements on latency, reliability, and privacy preservation. Generative large language models (LLMs) are becoming key enablers for the integration of semantic communication and computation. However, given the limited computational resources of edge devices and the growing complexity of heterogeneous terminal access, existing centralized inference approaches fail to meet the dual demands of response efficiency and data privacy in edge-side inference tasks. To address these challenges, this paper proposes a novel collaborative inference architecture that integrates cloud-based LLMs with edge-deployed small language models (SLMs), enabling dynamic scheduling and sharing of semantic-level intermediate states and establishing a unified computation-communication paradigm tailored for 6G networks. Specifically, a key-value (KV) cache reuse mechanism is introduced to enhance the semantic understanding of edge models through contextual guidance from the cloud, while significantly reducing edge-side computational and storage overhead. Furthermore, a cross-node parallel scheduling mechanism is proposed to achieve asynchronous coordination between model state loading and decoding computation, thereby improving edge responsiveness. In addition, we investigate layer alignment and representation compression strategies between heterogeneous models to alleviate the communication burden on the edge. Experimental results demonstrate that the proposed architecture exhibits superior adaptability and scalability in terms of inference latency, system stability, and concurrent processing capacity.
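The three mechanisms in the abstract can be illustrated with a minimal toy sketch. All sizes, function names, and the layer-striding/truncation scheme below are hypothetical illustrations, not the paper's actual implementation: the "cloud cache" is a nested list standing in for real KV tensors, layer alignment is simulated by striding over the deeper cloud stack, compression by truncating each vector, and the loading–decoding overlap by a loader thread feeding a queue that the "decoder" drains concurrently.

```python
import threading
import queue
import random

def cloud_kv_cache(num_layers=8, seq_len=16, dim=32, seed=0):
    """Simulate the cloud LLM's per-layer KV cache as nested lists
    (hypothetical sizes; a real cache holds per-head key/value tensors)."""
    rng = random.Random(seed)
    return [[[rng.gauss(0, 1) for _ in range(dim)] for _ in range(seq_len)]
            for _ in range(num_layers)]

def align_and_compress(cache, edge_layers=4, keep_dim=16):
    """Layer alignment + representation compression (toy version):
    map the deep cloud stack onto a shallower edge SLM by striding over
    layers, and truncate each vector before transfer to cut bandwidth."""
    stride = len(cache) // edge_layers
    return [[vec[:keep_dim] for vec in cache[i * stride]]
            for i in range(edge_layers)]

def edge_decode_with_async_load(compressed):
    """Asynchronous state loading-decoding: a loader thread streams layer
    states into a queue while the consumer drains it, so transfer and
    decoding overlap instead of running back-to-back."""
    q = queue.Queue()

    def loader():
        for layer in compressed:
            q.put(layer)      # in practice: decompress/dequantize here
        q.put(None)           # sentinel: all layers delivered

    threading.Thread(target=loader, daemon=True).start()
    decoded_layers = []
    while (layer := q.get()) is not None:
        decoded_layers.append(layer)  # a real decoder runs attention here
    return decoded_layers

cache = cloud_kv_cache()
compressed = align_and_compress(cache)
loaded = edge_decode_with_async_load(compressed)
orig = sum(len(v) for layer in cache for v in layer)        # floats produced
sent = sum(len(v) for layer in compressed for v in layer)   # floats transferred
print(len(loaded), orig // sent)
```

With these toy parameters the edge receives 4 aligned layers while transferring 4× fewer values (8 layers × dim 32 reduced to 4 layers × dim 16), which is the qualitative effect the abstract claims; the reported 37.2% latency gain comes from the paper's full system, not from this sketch.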