🤖 AI Summary
To address high end-to-end latency in cross-platform Function-as-a-Service (FaaS) workflows—caused by hardcoded composition logic, cascading cold starts, inter-function latency, and data-download bottlenecks on the critical path—this paper proposes GEOFF, a federated serverless choreography middleware. The middleware enables collaborative execution across public and private FaaS platforms and supports ad-hoc workflow recomposition at runtime. It integrates function pre-warming and data pre-fetching to take cold starts and data downloads off the critical path. Leveraging cross-platform API abstraction and a lightweight orchestration engine, it achieves transparent, platform-agnostic scheduling. In experiments with a proof-of-concept prototype and a realistic application, the approach reduces end-to-end latency by more than 50%, validating the effectiveness of federated execution combined with proactive pre-warming and pre-fetching.
📝 Abstract
Function-as-a-Service (FaaS) is a popular cloud computing model in which applications are implemented as workflows of multiple independent functions. While cloud providers usually offer composition services for such workflows, they do not support cross-platform workflows, forcing developers to hardcode the composition logic. Furthermore, FaaS workflows tend to be slow due to cascading cold starts, inter-function latency, and data download latency on the critical path. In this paper, we propose GEOFF, a serverless choreography middleware that executes FaaS workflows across different public and private FaaS platforms, including ad-hoc workflow recomposition. Furthermore, GEOFF supports function pre-warming and data pre-fetching. This minimizes end-to-end workflow latency by taking cold starts and data download latency off the critical path. In experiments with our proof-of-concept prototype and a realistic application, we were able to reduce end-to-end latency by more than 50%.
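The core latency optimization described above—overlapping the pre-warming and data pre-fetching of the next workflow step with the execution of the current one—can be illustrated with a minimal sketch. This is not GEOFF's actual implementation; the function and step names are hypothetical, and real platform warm-up calls and object-store downloads are simulated with sleeps.

```python
import concurrent.futures
import time

def prewarm(fn_name):
    # Hypothetical stand-in for a warm-up ping that spins up the next
    # function's container before it is actually invoked.
    time.sleep(0.01)
    return f"{fn_name}:warm"

def prefetch(url):
    # Hypothetical stand-in for downloading the next step's input data
    # ahead of time (e.g., from object storage).
    time.sleep(0.01)
    return f"data from {url}"

def run_step(fn_name, data):
    # Stand-in for invoking one function of the workflow.
    return f"{fn_name}({data})"

def choreograph(steps):
    """Run a linear workflow, overlapping pre-warm and prefetch of
    step i+1 with the execution of step i, so cold starts and data
    downloads leave the critical path."""
    results = []
    data = "initial"
    with concurrent.futures.ThreadPoolExecutor() as pool:
        for i, (fn_name, url) in enumerate(steps):
            # Kick off warm-up and prefetch for the *next* step first...
            nxt = steps[i + 1] if i + 1 < len(steps) else None
            warm_f = pool.submit(prewarm, nxt[0]) if nxt else None
            fetch_f = pool.submit(prefetch, nxt[1]) if nxt else None
            # ...then run the current step concurrently with them.
            results.append(run_step(fn_name, data))
            if nxt:
                warm_f.result()           # next container is warm
                data = fetch_f.result()   # next input is already local
    return results

workflow = [("resize", "s3://bucket/a"), ("classify", "s3://bucket/b")]
print(choreograph(workflow))
# → ['resize(initial)', 'classify(data from s3://bucket/b)']
```

In a real federated setting, `prewarm` and `prefetch` would go through a cross-platform API abstraction so the same choreography logic works against different public and private FaaS providers.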