Splitwise: Collaborative Edge-Cloud Inference for LLMs via Lyapunov-Assisted DRL

📅 2025-12-29
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address resource constraints on edge devices, high cloud inference latency, and the inflexibility of static edge-cloud partitioning under bandwidth fluctuations, this paper proposes a fine-grained, adaptive edge-cloud collaborative LLM inference framework. It enables dynamic intra-layer partitioning of attention heads and feed-forward sub-blocks within Transformer layers, and introduces a Lyapunov-guided hierarchical deep reinforcement learning policy that jointly optimizes latency, energy consumption, and accuracy while ensuring task-queue stability. A partitioned checkpointing mechanism with exponential backoff recovery further improves communication robustness. Experiments on platforms including Jetson Orin NX show that the proposed approach reduces end-to-end latency by 1.4–2.8×, cuts energy consumption by up to 41%, and lowers the 95th-percentile latency by 53–61% compared to cloud-only execution, while preserving model accuracy and keeping memory overhead bounded.
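The Lyapunov-guided selection described above can be sketched with a standard drift-plus-penalty rule: each candidate partition is scored by how much it grows the task-queue backlog plus a V-weighted penalty (latency/energy/accuracy cost), and the minimizer is chosen per time slot. All names, candidates, and weights below are illustrative assumptions, not the paper's actual policy.

```python
# Minimal drift-plus-penalty sketch (illustrative, not the paper's policy).

def drift_plus_penalty(queue_backlog, arrivals, served, penalty, v=10.0):
    """Score one candidate partition for the current slot (lower is better).

    The quadratic-drift term penalizes queue growth (stability); the
    V-weighted penalty term encodes latency/energy/accuracy cost.
    """
    next_backlog = max(queue_backlog + arrivals - served, 0.0)
    drift = 0.5 * (next_backlog ** 2 - queue_backlog ** 2)
    return drift + v * penalty

def choose_partition(candidates, queue_backlog, arrivals):
    # candidates: list of (name, served_tokens, penalty) tuples
    return min(
        candidates,
        key=lambda c: drift_plus_penalty(queue_backlog, arrivals, c[1], c[2]),
    )[0]

candidates = [
    ("all-edge",   4.0, 3.5),   # low throughput, low comm penalty
    ("split-head", 8.0, 2.0),   # intra-layer split of attention heads
    ("all-cloud", 12.0, 6.0),   # high throughput, high latency penalty
]
print(choose_partition(candidates, queue_backlog=20.0, arrivals=6.0))  # all-cloud
print(choose_partition(candidates, queue_backlog=2.0, arrivals=6.0))   # split-head
```

Note how the same penalty values yield different choices as the backlog changes: a long queue pushes work toward the high-throughput cloud, while a short queue favors the cheaper intra-layer split, which is the adaptivity the framework targets.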

๐Ÿ“ Abstract
Deploying large language models (LLMs) on edge devices is challenging due to their limited memory and power resources. Cloud-only inference reduces device burden but introduces high latency and cost. Static edge-cloud partitions optimize a single metric and struggle when bandwidth fluctuates. We propose Splitwise, a novel Lyapunov-assisted deep reinforcement learning (DRL) framework for fine-grained, adaptive partitioning of LLMs across edge and cloud environments. Splitwise decomposes transformer layers into attention heads and feed-forward sub-blocks, exposing more partition choices than layer-wise schemes. A hierarchical DRL policy, guided by Lyapunov optimization, jointly minimizes latency, energy consumption, and accuracy degradation while guaranteeing queue stability under stochastic workloads and variable network bandwidth. Splitwise also ensures robustness via partition checkpoints with exponential backoff recovery in case of communication failures. Experiments on Jetson Orin NX, Galaxy S23, and Raspberry Pi 5 with GPT-2 (1.5B), LLaMA-7B, and LLaMA-13B show that Splitwise reduces end-to-end latency by 1.4×–2.8× and cuts energy consumption by up to 41% compared with existing partitioners. It lowers the 95th-percentile latency by 53–61% relative to cloud-only execution, while maintaining accuracy with modest memory requirements.
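The abstract's claim that intra-layer decomposition "exposes more partition choices than layer-wise schemes" can be made concrete with a rough count: a layer-wise scheme offers only one cut point per layer boundary, whereas splitting each layer's attention heads and feed-forward sub-blocks multiplies the per-layer options. The function below is an illustrative lower bound under the assumption of contiguous edge/cloud splits; it is not the paper's search-space formulation.

```python
# Illustrative partition-space count (assumes contiguous edge/cloud splits).

def partition_space(n_layers, n_heads, ffn_blocks):
    """Compare layer-wise cut points with intra-layer split choices."""
    layer_wise = n_layers + 1  # one cut point between consecutive layers
    # Per layer: n_heads + 1 contiguous head splits, times
    # ffn_blocks + 1 feed-forward sub-block splits.
    per_layer = (n_heads + 1) * (ffn_blocks + 1)
    return layer_wise, n_layers * per_layer

# e.g. a LLaMA-7B-like model: 32 layers, 32 heads, 2 FFN sub-blocks per layer
print(partition_space(32, 32, 2))  # (33, 3168)
```

Even this conservative count gives roughly two orders of magnitude more split choices, which is what lets the DRL policy track bandwidth fluctuations more finely than a layer-wise partitioner.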
Problem

Research questions and friction points this paper is trying to address.

Optimizing edge-cloud LLM inference under resource constraints and fluctuating bandwidth.
Minimizing latency, energy consumption, and accuracy loss in distributed LLM deployment.
Ensuring robust LLM partitioning with stability guarantees during network variability.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lyapunov-assisted DRL for adaptive edge-cloud partitioning
Hierarchical policy minimizing latency, energy, and accuracy loss
Exponential backoff recovery ensuring robustness in failures
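The exponential-backoff recovery mentioned above is a well-known pattern; a minimal sketch follows, assuming a hypothetical `send` callable that raises `ConnectionError` when a checkpoint upload fails. The jitter factor and retry budget are illustrative choices, not values from the paper.

```python
import random
import time

def send_with_backoff(send, payload, max_retries=5, base_delay=0.05):
    """Retry a partition-checkpoint upload with jittered exponential backoff.

    `send` is any callable that raises ConnectionError on failure; the
    delay doubles each attempt, with random jitter to avoid retry storms.
    """
    for attempt in range(max_retries):
        try:
            return send(payload)
        except ConnectionError:
            if attempt == max_retries - 1:
                raise  # budget exhausted; surface the failure
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            time.sleep(delay)
```

Because each retry resumes from the last partition checkpoint rather than restarting the whole inference, transient link failures cost only the delayed sub-block, not the full request.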