AI Summary
To address resource constraints on edge devices, high cloud inference latency, and the inflexibility of static edge-cloud partitioning under bandwidth fluctuations, this paper proposes a fine-grained, adaptive edge-cloud collaborative LLM inference framework. It enables dynamic intra-layer partitioning of attention heads and feed-forward sub-blocks within Transformer layers and introduces a Lyapunov-guided hierarchical deep reinforcement learning policy to jointly optimize latency, energy consumption, and accuracy while ensuring task queue stability. A partitioned checkpointing mechanism with exponential backoff recovery is further incorporated to enhance communication robustness. Experiments on platforms including Jetson Orin NX demonstrate that our approach reduces end-to-end latency by 1.4–2.8×, cuts energy consumption by up to 41%, and lowers the 95th-percentile latency by 53–61% compared to pure cloud execution, while preserving model accuracy and maintaining bounded memory overhead.
Abstract
Deploying large language models (LLMs) on edge devices is challenging due to their limited memory and power resources. Cloud-only inference reduces device burden but introduces high latency and cost. Static edge-cloud partitions optimize a single metric and struggle when bandwidth fluctuates. We propose Splitwise, a novel Lyapunov-assisted deep reinforcement learning (DRL) framework for fine-grained, adaptive partitioning of LLMs across edge and cloud environments. Splitwise decomposes Transformer layers into attention heads and feed-forward sub-blocks, exposing more partition choices than layer-wise schemes. A hierarchical DRL policy, guided by Lyapunov optimization, jointly minimizes latency, energy consumption, and accuracy degradation while guaranteeing queue stability under stochastic workloads and variable network bandwidth. Splitwise also ensures robustness via partition checkpoints with exponential backoff recovery in case of communication failures. Experiments on Jetson Orin NX, Galaxy S23, and Raspberry Pi 5 with GPT-2 (1.5B), LLaMA-7B, and LLaMA-13B show that Splitwise reduces end-to-end latency by 1.4–2.8× and cuts energy consumption by up to 41% compared with existing partitioners. It lowers the 95th-percentile latency by 53–61% relative to cloud-only execution, while maintaining accuracy and a modest memory footprint.
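For readers unfamiliar with Lyapunov optimization, the standard drift-plus-penalty formulation gives a sense of how queue stability and cost minimization are traded off. The symbols below follow the textbook form, not notation from this paper: \(Q(t)\) is the task-queue backlog, \(p(t)\) a weighted penalty (e.g. latency, energy, accuracy loss), and \(V \ge 0\) a control knob that trades stability against penalty.

$$
L(t) = \tfrac{1}{2}\,Q(t)^2, \qquad
\Delta(t) = \mathbb{E}\big[L(t{+}1) - L(t) \,\big|\, Q(t)\big]
$$

At each slot, the policy greedily minimizes the drift-plus-penalty bound:

$$
\min \;\; \Delta(t) + V \cdot \mathbb{E}\big[p(t) \,\big|\, Q(t)\big]
$$

Larger \(V\) weights the cost objective more heavily at the expense of longer (but still bounded) queues; this is the general mechanism by which a Lyapunov-guided policy can certify queue stability while a learned component optimizes the penalty term.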
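The recovery mechanism mentioned above, retrying failed edge-cloud transfers with exponential backoff, can be illustrated with a generic sketch. All names and defaults here (`base`, `cap`, `max_retries`, jitter) are illustrative assumptions, not details from the Splitwise paper:

```python
import random


def backoff_delays(base=0.1, cap=8.0, max_retries=6, jitter=True):
    """Return retry wait times that double each attempt, capped at `cap` seconds.

    A generic exponential-backoff sketch: on each communication failure the
    sender waits the next delay, then re-sends from the last checkpoint.
    """
    delays = []
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))  # 0.1, 0.2, 0.4, ... capped
        if jitter:
            # Full jitter spreads simultaneous retries out in time,
            # avoiding synchronized retry storms against the cloud endpoint.
            delay = random.uniform(0, delay)
        delays.append(delay)
    return delays


# Deterministic schedule (jitter disabled) for inspection:
schedule = backoff_delays(base=0.1, cap=8.0, max_retries=6, jitter=False)
```

With jitter disabled the schedule doubles each step (0.1 s, 0.2 s, 0.4 s, ...); in practice jitter is usually kept on so that many edge devices recovering from the same network outage do not retry in lockstep.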