🤖 AI Summary
Serverless computing suffers from CPU-intensive data-plane bottlenecks and multi-tenant contention for shared network resources during cross-node scaling. This paper introduces Palladium, a DPU-native data plane that offloads network processing via zero-copy RDMA communication, CPU–DPU shared-memory coordination, and early HTTP/TCP-to-RDMA protocol translation at the cloud ingress. To accommodate the DPU's limited general-purpose compute, the authors design a two-stage zero-copy data path and build the DPU-enabled network engine (DNE), a lightweight reverse proxy that isolates RDMA resources from tenant functions and enforces fair scheduling under multi-tenant contention. Preliminary experiments show a 20.9× improvement in request throughput and up to a 21× reduction in latency, while saving up to seven CPU cores and consuming only two low-power DPU cores.
📝 Abstract
Serverless computing promises enhanced resource efficiency and lower user costs, yet it is burdened by a heavyweight, CPU-bound data plane. Prior efforts that exploit shared memory reduce overhead locally but fall short when scaling across nodes. Furthermore, serverless environments can exhibit unpredictable, large-scale multi-tenancy, leading to contention for shared network resources. We present Palladium, a DPU-centric serverless data plane that reduces the CPU burden and enables efficient, zero-copy communication in multi-tenant serverless clouds. Despite the limited general-purpose processing capability of DPU cores, Palladium strategically exploits the DPU's potential by (1) offloading data transmission to high-performance NIC cores via RDMA, combined with intra-node shared memory, to eliminate data copies across nodes, and (2) enabling cross-processor (CPU-DPU) shared memory to eliminate redundant data movement that would otherwise overwhelm the wimpy DPU cores. At the core of Palladium is the DPU-enabled network engine (DNE) -- a lightweight reverse proxy that isolates RDMA resources from tenant functions, orchestrates inter-node RDMA flows, and enforces fairness under contention. To further reduce CPU involvement, Palladium performs early HTTP/TCP-to-RDMA transport conversion at the cloud ingress, bridging the protocol mismatch before client traffic enters the RDMA fabric and thus avoiding costly protocol translation on the critical path. We show that the careful selection of RDMA primitives (i.e., two-sided instead of one-sided) significantly affects the zero-copy data plane. Our preliminary experimental results show that enabling DPU offloading in Palladium improves RPS by 20.9x and reduces latency by up to 21x in the best case, all while saving up to seven CPU cores and consuming only two wimpy DPU cores.
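The zero-copy idea underlying both the intra-node path and the CPU-DPU coordination can be illustrated with a small, hedged sketch: two components hand off a payload through a named shared-memory region rather than copying it across a socket. This is only a conceptual analogy in Python, not Palladium's actual implementation (which uses DPU hardware and RDMA verbs); the producer/consumer roles stand in for a tenant function and the DNE proxy.

```python
from multiprocessing import shared_memory

# Hypothetical illustration, not the paper's code: a "producer" (e.g. a
# tenant function on the CPU) writes a payload directly into a shared
# region; a "consumer" (e.g. the network engine) attaches to the same
# region by name and reads it without an intermediate buffer copy.
region = shared_memory.SharedMemory(create=True, size=4096)
payload = b"HTTP response body"
region.buf[: len(payload)] = payload  # write in place, no serialization

# Consumer side: attach to the existing region by name.
view = shared_memory.SharedMemory(name=region.name)
received = bytes(view.buf[: len(payload)])
print(received)  # b'HTTP response body'

view.close()
region.close()
region.unlink()
```

In the real system the handoff metadata (region name, offset, length) would travel over the control path while the payload stays put, which is what lets the wimpy DPU cores avoid per-request data movement.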