🤖 AI Summary
This paper exposes a fundamental trade-off between performance and cost in mainstream auto-scaling strategies for serverless computing: frequent instance cold starts and shutdowns incur 10–40% additional CPU overhead, while memory allocation exhibits 2–10× redundancy, and existing optimizations that reduce these overheads typically incur significant latency penalties. To address this, the authors develop a reproducible, transparent evaluation framework, open-sourcing a system that accurately emulates the control-plane behaviors of AWS Lambda and Google Cloud Run and integrating it with real-world deployments and large-scale simulations. This enables the first systematic, quantitative characterization of latency, memory, and CPU overhead under realistic synchronous and asynchronous workloads. Key contributions include: (i) precise identification of auto-scaling efficiency bottlenecks; (ii) formulation of novel, overhead-aware scaling design principles; and (iii) an empirical foundation and methodological guidance for building high-performance, cost-efficient serverless control planes.
📄 Abstract
Serverless computing is transforming cloud application development, but the performance-cost trade-offs of control plane designs remain poorly understood due to a lack of open, cross-platform benchmarks and detailed system analyses. In this work, we address these gaps by designing a serverless system that approximates the scaling behaviors of commercial providers, including AWS Lambda and Google Cloud Run. We systematically compare the performance and cost-efficiency of both synchronous and asynchronous autoscaling policies by replaying real-world workloads and varying key autoscaling parameters.
We demonstrate that our open-source systems can closely replicate the operational characteristics of commercial platforms, enabling reproducible and transparent experimentation. By evaluating how autoscaling parameters affect latency, memory usage, and CPU overhead, we reveal several key findings. First, serverless systems incur significant computational overhead from instance churn, equivalent to 10-40% of the CPU cycles spent on request handling, and this overhead originates primarily on worker nodes. Second, scaling policies cause substantial memory over-allocation: systems reserve 2-10 times more memory than is actively used. Finally, we show that reducing these overheads in current systems typically causes significant performance degradation, underscoring the need for new, cost-efficient autoscaling strategies. To extend our evaluation closer to production scale, we additionally employ a hybrid methodology that combines real control-plane deployments with large-scale simulation, bridging the gap between small research clusters and real-world environments.
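The tension between churn overhead and idle memory can be illustrated with a toy event-replay sketch. This is our own illustration, not the paper's simulator: the single-parameter keep-alive policy, the `service` and `keep_alive` values, and the idle-time accounting are all hypothetical simplifications.

```python
def simulate(arrivals, service=0.1, keep_alive=60.0):
    """Replay request arrival times through a keep-alive scaling policy.

    Returns (cold_starts, busy_time, idle_time). Cold starts relative to
    busy_time approximate churn CPU overhead; (busy + idle) / busy
    approximates the memory redundancy factor.
    """
    free = []          # times at which warm instances became free
    cold = 0
    busy = idle = 0.0
    for t in sorted(arrivals):
        # Expire instances that have sat idle longer than keep_alive.
        alive = []
        for f in free:
            if t - f > keep_alive:
                idle += keep_alive       # idled for the full window, then shut down
            else:
                alive.append(f)
        free = alive
        # Reuse the most recently freed warm instance, else cold-start one.
        if free:
            f = max(free)
            free.remove(f)
            idle += t - f                # warm but unused before this reuse
        else:
            cold += 1
        busy += service
        free.append(t + service)
    # Remaining warm instances idle out after the last request.
    idle += keep_alive * len(free)
    return cold, busy, idle
```

With dense arrivals and a long keep-alive, cold starts vanish but idle (reserved-yet-unused) time grows; shrinking `keep_alive` trades that idle memory for more cold starts, which is the performance-cost trade-off the evaluation quantifies.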