🤖 AI Summary
To address the inefficiency and high intermediate-storage costs (24%–99% of the total bill) in data-intensive serverless workflows, caused by reliance on external storage systems such as S3 or ElastiCache, this paper proposes Zipline, the first memory-to-memory direct-transfer architecture for serverless computing. Zipline eliminates intermediate storage entirely through an API-compatible mechanism, sender-side in-memory buffering combined with receiver-side on-demand pulling, while preserving invocation semantics. Integrated with vHive and Knative, it enables deep co-design with load balancing and auto-scaling, supporting memory-reference passing and dynamic memory-access scheduling. Evaluation shows that Zipline reduces cost by 2–5× and execution time by 1.3–3.4× compared to S3; versus ElastiCache, it achieves 17–772× cost reductions alongside a 2%–5% performance improvement.
📝 Abstract
Serverless computing is a popular cloud deployment paradigm where developers implement applications as workflows of functions that invoke each other. Cloud providers automatically scale function instances on demand and forward workflow requests to appropriate instances. However, current serverless clouds lack efficient cross-function data transfer, limiting the execution of data-intensive applications. Functions often rely on third-party services like AWS S3, AWS ElastiCache, or multi-tier solutions for intermediate data transfers, which introduces inefficiencies. We demonstrate that such through-storage transfers make data-intensive deployments economically impractical, with storage costs comprising 24–99% of the total serverless bill. To address this, we introduce Zipline, a fast, API-preserving data communication method for serverless platforms. Zipline enables direct function-to-function transfers, where the sender function buffers payloads in memory and sends a reference to the receiver. The receiver retrieves the data directly from the sender's memory, guided by the load balancer and autoscaler. Zipline integrates seamlessly with existing autoscaling, maintains invocation semantics, and eliminates the costs and overheads of intermediate services. We prototype Zipline in vHive/Knative on AWS EC2 nodes, demonstrating significant improvements. Zipline reduces costs and improves latency and bandwidth compared to AWS S3 (the lowest-cost solution) and ElastiCache (the highest-performance solution). On real-world applications, Zipline lowers costs by 2–5× and reduces execution times by 1.3–3.4× versus S3. Compared to ElastiCache, Zipline achieves 17–772× cost reductions while improving performance by 2–5%.
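The reference-passing mechanism described above (sender buffers the payload in memory, the receiver pulls it on demand via a small reference) can be sketched as follows. This is a minimal illustrative simulation, not Zipline's actual API: all class and field names (`SenderInstance`, `buffer_id`, the `instances` routing table) are hypothetical, and a plain dictionary stands in for the load balancer/autoscaler that routes pulls to the correct instance in the real system.

```python
import uuid

class SenderInstance:
    """Hypothetical sender function instance that keeps payloads in its own memory."""

    def __init__(self, address: str):
        self.address = address
        self._buffers: dict[str, bytes] = {}  # buffer_id -> payload bytes

    def put(self, payload: bytes) -> dict:
        """Buffer the payload locally and return a small reference.

        Only this reference travels with the downstream invocation,
        instead of the payload passing through S3 or ElastiCache.
        """
        buffer_id = str(uuid.uuid4())
        self._buffers[buffer_id] = payload
        return {"host": self.address, "buffer_id": buffer_id}

    def pull(self, buffer_id: str) -> bytes:
        """Serve a pull request; the buffer is freed once fetched."""
        return self._buffers.pop(buffer_id)

def receive(ref: dict, instances: dict) -> bytes:
    """Resolve a reference by pulling directly from the sender's memory.

    In the real system the load balancer/autoscaler guides this pull to
    the right instance; here a lookup table plays that role.
    """
    return instances[ref["host"]].pull(ref["buffer_id"])

# Usage: sender produces intermediate data, receiver consumes it memory-to-memory.
sender = SenderInstance("10.0.0.1:8080")
instances = {sender.address: sender}
ref = sender.put(b"intermediate result")
data = receive(ref, instances)
```

The key property the sketch captures is that the invocation payload stays small (a host and a buffer id) while the bulk data moves exactly once, directly between the two function instances.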