LLM-Driven Intent-Based Privacy-Aware Orchestration Across the Cloud-Edge Continuum

📅 2026-02-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of elastic deployment for large language model (LLM) inference in resource-constrained heterogeneous GPU environments. It proposes a state-preserving dynamic pipeline reconfiguration method that enables online adjustment of inference configurations in response to varying workloads while maintaining service continuity. By leveraging lightweight model state migration and dynamic scheduling, the approach achieves low-overhead online reconfiguration of LLM inference pipelines for the first time. Evaluated on heterogeneous GPU platforms such as A100 and L40s, the method limits service interruption to under 50 ms and incurs less than 10% performance overhead in both Time-to-First-Token (TTFT) and Time-per-Output-Token (TPOT), significantly improving resource utilization and response efficiency.

Technology Category

Application Category

📝 Abstract
With the rapid advancement of large language models (LLMs), efficiently serving LLM inference under limited GPU resources has become a critical challenge. Recently, an increasing number of studies have explored applying serverless computing paradigms to LLM serving in order to maximize resource utilization. However, LLM inference workloads are highly diverse, and modern GPU clusters are inherently heterogeneous, making it necessary to dynamically adjust deployment configurations online to better adapt to the elastic and dynamic nature of serverless environments. At the same time, enabling such online reconfiguration is particularly challenging due to the stateful nature of LLM inference and the massive size of model parameters. In this paper, we propose a dynamic pipeline reconfiguration approach that enables online adjustment of pipeline configurations while minimizing service downtime and performance degradation. Our method allows the system to select the optimal pipeline configuration in response to changing workloads. Experimental results on heterogeneous GPU platforms, including NVIDIA A100 and L40s, demonstrate that our migration mechanism incurs less than 50 ms of service downtime, while introducing under 10% overhead on both time-to-first-token (TTFT) and time-per-output-token (TPOT).
Problem

Research questions and friction points this paper is trying to address.

LLM inference
serverless computing
dynamic reconfiguration
heterogeneous GPU
cloud-edge continuum
Innovation

Methods, ideas, or system contributions that make the work stand out.

dynamic pipeline reconfiguration
LLM inference
serverless computing
heterogeneous GPUs
online reconfiguration
🔎 Similar Papers
No similar papers found.