A Decentralized Microservice Scheduling Approach Using Service Mesh in Cloud-Edge Systems

📅 2025-10-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address high latency, substantial coordination overhead, and weak fault tolerance arising from centralized scheduling in cloud-edge collaborative microservice systems, this paper proposes a decentralized scheduling architecture leveraging service mesh sidecar proxies. The core innovation embeds lightweight, autonomous scheduling logic directly into each sidecar, enabling fully localized service discovery, load-aware routing, and real-time decision-making—eliminating dependence on a central controller. By exploiting the distributed traffic control capabilities and programmability of service meshes, the architecture supports dynamic topology adaptation and self-healing under failures. Experimental evaluation demonstrates that, under diverse workload pressures, the approach reduces average response latency by 37.2% and scheduling coordination overhead by 89.5% compared to conventional centralized schemes. Moreover, system throughput scales nearly linearly with node count, confirming strong scalability and real-time performance.

Technology Category

Application Category

📝 Abstract
As microservice-based systems scale across the cloud-edge continuum, traditional centralized scheduling mechanisms increasingly struggle with latency, coordination overhead, and fault tolerance. This paper presents a new architectural direction: leveraging service mesh sidecar proxies as decentralized, in-situ schedulers to enable scalable, low-latency coordination in large-scale, cloud-native environments. We propose embedding lightweight, autonomous scheduling logic into each sidecar, allowing scheduling decisions to be made locally without centralized control. This approach leverages the growing maturity of service mesh infrastructures, which support programmable distributed traffic management. We describe the design of such an architecture and present initial results demonstrating its scalability potential in terms of response time and latency under varying request rates. Rather than delivering a finalized scheduling algorithm, this paper presents a system-level architectural direction and preliminary evidence to support its scalability potential.
Problem

Research questions and friction points this paper is trying to address.

Decentralized microservice scheduling for cloud-edge systems
Reducing latency and coordination overhead in scheduling
Enabling scalable fault-tolerant scheduling via service mesh
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized microservice scheduling via service mesh
Embedding autonomous scheduling logic in sidecars
Leveraging programmable service mesh infrastructures
🔎 Similar Papers
No similar papers found.