Parallax: Efficient LLM Inference Service over Decentralized Environment

📅 2025-09-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address challenges in decentralized LLM inference—including GPU heterogeneity, low-bandwidth interconnects, and dynamic resource availability—this paper proposes Parallax, a decentralized large language model inference system designed for volunteer computing. Its core innovation is a two-stage cooperative scheduler: the first stage jointly optimizes model sharding and cross-node pipeline construction; the second enables fine-grained request scheduling and dynamic load balancing. Coupled with a lightweight communication protocol and adaptive pipeline parallelism, Parallax efficiently serves open-source LLMs on heterogeneous GPU pools. Experiments on a real-world volunteer node cluster demonstrate that Parallax reduces average latency by 38.2% and improves throughput by 2.1× over state-of-the-art decentralized baselines. These results validate volunteer computing as a viable, cost-effective, and highly elastic infrastructure for LLM inference.

📝 Abstract
Deploying a large language model (LLM) inference service remains costly because centralized serving depends on specialized GPU clusters and high-bandwidth interconnects in datacenters. An appealing alternative is to leverage collaborative decentralized GPU pools. However, GPU heterogeneity and limited interconnect bandwidth, along with potentially dynamic availability, make efficient scheduling the central challenge in this scenario. In this paper, we present Parallax, a decentralized LLM serving system that turns a pool of heterogeneous GPUs into an efficient inference platform via a two-phase scheduler. Parallax decomposes planning into (i) model allocation, which places layers of each replica across diverse GPUs to jointly optimize latency and throughput under memory and link-bandwidth constraints, and (ii) request-time GPU pipeline selection, which stitches layers from different replicas into end-to-end execution chains that balance load and adapt to current conditions. We implement Parallax and evaluate it on open-source LLMs deployed over real volunteer nodes. Parallax consistently reduces latency and increases throughput relative to decentralized baselines, demonstrating that principled scheduling can make volunteer compute a practical, affordable substrate for LLM inference. GitHub repo: https://github.com/GradientHQ/parallax.
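The first phase described above, model allocation, places the layers of each model replica on heterogeneous GPUs subject to memory limits. The paper does not publish its algorithm here, so the following is only a minimal illustrative sketch: a greedy placement that fills the largest-memory GPUs first so each replica spans as few cross-node hops as possible. The `GPU` class, `allocate_layers` function, and all parameter names are hypothetical, not Parallax's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class GPU:
    name: str
    mem_gb: float                              # free memory on this device
    layers: list = field(default_factory=list) # layer indices assigned here

def allocate_layers(num_layers: int, per_layer_gb: float, gpus: list) -> dict:
    """Greedy phase-1 sketch: walk the model's layers in order and pack
    contiguous blocks onto GPUs sorted by free memory, so one replica is
    split into as few pipeline stages (cross-node hops) as possible."""
    order = sorted(gpus, key=lambda g: g.mem_gb, reverse=True)
    layer = 0
    for gpu in order:
        while layer < num_layers and gpu.mem_gb >= per_layer_gb:
            gpu.layers.append(layer)
            gpu.mem_gb -= per_layer_gb
            layer += 1
    if layer < num_layers:
        raise RuntimeError("pool has too little free memory for one replica")
    return {g.name: g.layers for g in order if g.layers}
```

A real planner would also weigh inter-node link bandwidth and per-GPU throughput when choosing split points, as the abstract notes; this sketch only captures the memory-constrained packing aspect.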
Problem

Research questions and friction points this paper is trying to address.

Optimizes LLM inference in decentralized GPU environments
Addresses GPU heterogeneity and network bandwidth limitations
Schedules model allocation and request execution efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses decentralized heterogeneous GPU pools for inference
Implements two-phase scheduler for model allocation
Dynamically stitches layers across replicas for execution
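The last bullet, request-time stitching of layers across replicas, can be pictured as choosing, for each contiguous layer segment, whichever replica's GPU is currently least loaded. This is only a hedged sketch of that idea, not the system's actual selection policy; `select_pipeline` and its inputs are hypothetical names.

```python
def select_pipeline(segments: list, holders: dict, load: dict) -> list:
    """Phase-2 sketch: for each contiguous layer segment of the model,
    pick the least-loaded GPU currently serving that segment, stitching
    stages from different replicas into one end-to-end execution chain."""
    chain = []
    for seg in segments:
        candidates = holders[seg]                 # GPUs holding this segment
        best = min(candidates, key=lambda g: load[g])
        chain.append(best)
        load[best] += 1                           # account for the new request
    return chain
```

For example, with two replicas each split into two stages, a request may run its first stage on replica B's GPU and its second on replica A's, which is exactly what lets the scheduler balance load across the pool rather than within a single fixed replica.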
👥 Authors
Chris Tong (Gradient)
Youhe Jiang (Gradient, HKUST)
Gufeng Chen (Gradient)
Tianyi Zhao (University of Virginia)
Sibian Lu (Gradient)
Wenjie Qu (National University of Singapore)
Eric Yang (AI Scientist, Verily Life Sciences)
Lynn Ai (Gradient)
Binhang Yuan (HKUST)