🤖 AI Summary
To address the challenges of decentralized LLM inference, including GPU heterogeneity, low-bandwidth interconnects, and dynamic resource availability, this paper proposes Parallax, a decentralized large language model serving system designed for volunteer computing. Its core innovation is a two-phase scheduler: the first phase performs model allocation, placing the layers of each replica across diverse GPUs to jointly optimize latency and throughput under memory and link-bandwidth constraints; the second performs request-time pipeline selection, stitching layers from different replicas into end-to-end execution chains that balance load and adapt to current conditions. Coupled with a lightweight communication protocol and adaptive pipeline parallelism, Parallax efficiently serves open-source LLMs on heterogeneous GPU pools. Experiments on a cluster of real volunteer nodes show that Parallax reduces average latency by 38.2% and improves throughput by 2.1× over state-of-the-art decentralized baselines. These results validate volunteer computing as a viable, cost-effective, and highly elastic infrastructure for LLM inference.
📝 Abstract
Deploying a large language model (LLM) inference service remains costly because centralized serving depends on specialized GPU clusters and high-bandwidth interconnects in datacenters. An appealing alternative is to leverage collaborative decentralized GPU pools. However, GPU heterogeneity and limited interconnect bandwidth, along with potentially dynamic node availability, make efficient scheduling the central challenge in this scenario. In this paper, we present Parallax, a decentralized LLM serving system that turns a pool of heterogeneous GPUs into an efficient inference platform via a two-phase scheduler. Parallax decomposes planning into (i) model allocation, which places layers of each replica across diverse GPUs to jointly optimize latency and throughput under memory and link-bandwidth constraints, and (ii) request-time GPU pipeline selection, which stitches layers from different replicas into end-to-end execution chains that balance load and adapt to current conditions. We implement Parallax and evaluate it on open-source LLMs deployed over real volunteer nodes. Parallax consistently reduces latency and increases throughput relative to decentralized baselines, demonstrating that principled scheduling can make volunteer compute a practical, affordable substrate for LLM inference.
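To make the two-phase decomposition concrete, here is a minimal, hypothetical sketch of the idea: phase one greedily places model layers on GPUs under per-device memory limits, and phase two picks, per request, the least-loaded holder of each layer to form an execution chain. All names, data structures, and heuristics below are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of two-phase scheduling for decentralized LLM serving.
# The greedy heuristics here are placeholders; Parallax's real planner
# jointly optimizes latency/throughput under memory and bandwidth constraints.
from dataclasses import dataclass, field
from typing import List

@dataclass
class GPU:
    name: str
    mem_gb: float                                    # free memory budget
    layers: List[int] = field(default_factory=list)  # layer ids placed here
    load: int = 0                                    # in-flight requests

def allocate_layers(gpus: List[GPU], n_layers: int, gb_per_layer: float) -> None:
    """Phase 1 (model allocation): greedily assign contiguous layer shards
    to the GPUs with the most free memory, respecting memory limits."""
    layer = 0
    for g in sorted(gpus, key=lambda g: g.mem_gb, reverse=True):
        while layer < n_layers and g.mem_gb >= gb_per_layer:
            g.layers.append(layer)
            g.mem_gb -= gb_per_layer
            layer += 1
    if layer < n_layers:
        raise RuntimeError("insufficient aggregate GPU memory")

def select_pipeline(gpus: List[GPU], n_layers: int) -> List[GPU]:
    """Phase 2 (request-time pipeline selection): for each layer, route to
    the least-loaded GPU holding it, forming an end-to-end chain. With
    multiple replicas per layer, this balances load across them."""
    chain: List[GPU] = []
    for layer in range(n_layers):
        holders = [g for g in gpus if layer in g.layers]
        best = min(holders, key=lambda g: g.load)
        if not chain or chain[-1] is not best:
            chain.append(best)
        best.load += 1
    return chain

if __name__ == "__main__":
    pool = [GPU("volunteer-A", 8.0), GPU("volunteer-B", 8.0)]
    allocate_layers(pool, n_layers=4, gb_per_layer=3.0)
    print([(g.name, g.layers) for g in pool])
    print([g.name for g in select_pipeline(pool, n_layers=4)])
```

In a real deployment the phase-2 choice would also weigh link bandwidth between consecutive GPUs in the chain, not just queue depth; the sketch omits that to stay short.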
GitHub repo: https://github.com/GradientHQ/parallax.