Palladium: A DPU-enabled Multi-Tenant Serverless Cloud over Zero-copy Multi-node RDMA Fabrics

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Serverless computing suffers from CPU-intensive data-plane bottlenecks and multi-tenant contention for shared network resources during cross-node scaling. This paper introduces Palladium, a DPU-native data plane that offloads network processing via zero-copy RDMA communication, CPU–DPU shared-memory coordination, and early HTTP/TCP-to-RDMA protocol translation at the cloud ingress. To accommodate the limited compute capacity of DPU cores, the authors design a two-stage zero-copy data path. They also build the first multi-tenant DPU-enabled network engine (DNE), supporting fine-grained RDMA resource isolation and fair scheduling across tenants. Preliminary experiments show a 20.9× improvement in request throughput, up to a 21× reduction in latency, and savings of up to seven CPU cores, while consuming only two low-power DPU cores.

📝 Abstract
Serverless computing promises enhanced resource efficiency and lower user costs, yet is burdened by a heavyweight, CPU-bound data plane. Prior efforts exploiting shared memory reduce overhead locally but fall short when scaling across nodes. Furthermore, serverless environments can have unpredictable and large-scale multi-tenancy, leading to contention for shared network resources. We present Palladium, a DPU-centric serverless data plane that reduces the CPU burden and enables efficient, zero-copy communication in multi-tenant serverless clouds. Despite the limited general-purpose processing capability of the DPU cores, Palladium strategically exploits the DPU's potential by (1) offloading data transmission to high-performance NIC cores via RDMA, combined with intra-node shared memory to eliminate data copies across nodes, and (2) enabling cross-processor (CPU-DPU) shared memory to eliminate redundant data movement, which overwhelms wimpy DPU cores. At the core of Palladium is the DPU-enabled network engine (DNE) -- a lightweight reverse proxy that isolates RDMA resources from tenant functions, orchestrates inter-node RDMA flows, and enforces fairness under contention. To further reduce CPU involvement, Palladium performs early HTTP/TCP-to-RDMA transport conversion at the cloud ingress, bridging the protocol mismatch before client traffic enters the RDMA fabric, thus avoiding costly protocol translation along the critical path. We show that careful selection of RDMA primitives (i.e., two-sided instead of one-sided) significantly affects the zero-copy data plane. Our preliminary experimental results show that enabling DPU offloading in Palladium improves RPS by 20.9x. The latency is reduced by a factor of 21x in the best case, all the while saving up to 7 CPU cores, and only consuming two wimpy DPU cores.
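The abstract's intra-node zero-copy idea (functions and the network engine exchanging payloads through shared memory rather than copying across process boundaries) can be illustrated with POSIX-style shared memory. This is a minimal hedged sketch using Python's standard library, not the paper's actual CPU–DPU implementation; the producer/consumer roles are illustrative.

```python
# Sketch: intra-node zero-copy handoff via a named shared-memory segment.
# Only a descriptor (segment name + length) crosses the process boundary;
# the payload itself is written once and read in place.
from multiprocessing import shared_memory

# "Producer" (e.g., a function sandbox) writes the payload once.
payload = b"serverless response body"
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[: len(payload)] = payload

# "Consumer" (e.g., the network engine) attaches by name and reads
# in place -- no copy of the payload is made on the way.
view = shared_memory.SharedMemory(name=shm.name)
received = bytes(view.buf[: len(payload)])
assert received == payload

view.close()
shm.close()
shm.unlink()
```

In the paper's setting the same principle is extended across the CPU–DPU boundary and stitched to RDMA for inter-node transfers, so that data is copied zero times on the host along the critical path.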
Problem

Research questions and friction points this paper is trying to address.

Reducing CPU burden in multi-tenant serverless clouds
Enabling efficient zero-copy communication across nodes
Managing shared network resource contention at scale
Innovation

Methods, ideas, or system contributions that make the work stand out.

DPU-centric serverless data plane reduces CPU burden
Zero-copy communication via RDMA and shared memory
Early HTTP/TCP-to-RDMA conversion at cloud ingress
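The last innovation, early protocol conversion at the ingress, amounts to terminating HTTP/TCP once at the cloud edge and repacking each request into a compact binary frame that can travel over the RDMA fabric. A hedged sketch of that repacking step follows; the frame layout (tenant id, method, path, body) is hypothetical, since the paper does not publish its wire format.

```python
# Sketch: repack a raw HTTP request into a fixed-header binary frame
# suitable for posting as a two-sided RDMA SEND. Layout is illustrative.
import struct

def http_to_rdma_frame(raw_http: bytes, tenant_id: int) -> bytes:
    # Split headers from body at the blank line.
    head, _, body = raw_http.partition(b"\r\n\r\n")
    request_line = head.split(b"\r\n")[0]          # e.g. b"POST /fn HTTP/1.1"
    method, path, _ = request_line.split(b" ", 2)
    # Fixed header: tenant id (u32), method len (u8), path len (u16),
    # body len (u32), all network byte order -- 11 bytes total.
    header = struct.pack("!IBHI", tenant_id, len(method), len(path), len(body))
    return header + method + path + body

frame = http_to_rdma_frame(
    b"POST /resize HTTP/1.1\r\nHost: fn.example\r\n\r\n<img-bytes>", tenant_id=7
)
tenant, mlen, plen, blen = struct.unpack("!IBHI", frame[:11])
assert (tenant, blen) == (7, len(b"<img-bytes>"))
```

Doing this conversion once at the ingress, rather than inside each function's data path, is what keeps costly protocol translation off the critical path.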
Shixiong Qi
Assistant Professor, University of Kentucky
Cloud Computing, 5G

Songyu Zhang
University of California, Riverside

K. K. Ramakrishnan
Distinguished Professor of Computer Science and Engineering, University of California, Riverside
Computer Networking and Communications

D. Z. Tootaghaj
Hewlett Packard Labs

Hardik Soni
Hewlett Packard Labs

Puneet Sharma
Hewlett Packard Labs