REACH: Reinforcement Learning for Adaptive Microservice Rescheduling in the Cloud-Edge Continuum

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of simultaneously achieving low latency and stability in microservice deployment under heterogeneous and dynamic resource conditions across the cloud–edge continuum, this paper proposes an adaptive microservice rescheduling algorithm based on reinforcement learning. The method applies deep reinforcement learning to cloud–edge collaborative scenarios, enabling autonomous decision-making—guided by real-time resource-state feedback—on service migration and redeployment, so the system can respond online to workload fluctuations and performance variations. Evaluated on a real-world distributed testbed, the approach reduces average end-to-end latency by 7.9%, 10.0%, and 8.0% for three benchmark microservice applications, respectively, while significantly suppressing latency jitter. This work establishes a scalable, adaptive paradigm for service orchestration in dynamic edge environments.
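The decision loop the summary describes—observe resource state, pick a placement, learn from measured latency—can be sketched very roughly as a tabular Q-learner. This is an illustrative assumption, not the paper's actual design: REACH uses deep reinforcement learning on a real testbed, while the node names, the toy latency model, and the stateless per-decision reward below are all invented for the sketch.

```python
import random

# Hypothetical cloud-edge continuum: two edge nodes and one cloud node.
NODES = ["edge-0", "edge-1", "cloud-0"]
BASE = {"edge-0": 10.0, "edge-1": 12.0, "cloud-0": 25.0}  # toy base latency (ms)

def latency(node, hot):
    """Toy environment: placing the service on the overloaded ('hot')
    node multiplies its latency; otherwise base latency applies."""
    lat = BASE[node]
    if node == hot:
        lat *= 4.0  # resource pressure inflates latency
    return lat

def train(episodes=5000, alpha=0.2, eps=0.2, seed=1):
    """Learn Q(observed hot node, placement) -> expected latency,
    using epsilon-greedy exploration and latency as a cost signal."""
    rng = random.Random(seed)
    q = {(hot, n): 0.0 for hot in NODES for n in NODES}
    for _ in range(episodes):
        hot = rng.choice(NODES)  # observed resource-state feedback
        if rng.random() < eps:
            act = rng.choice(NODES)              # explore
        else:
            act = min(NODES, key=lambda n: q[(hot, act_key := n)])  # exploit: lowest predicted latency
        cost = latency(act, hot)
        q[(hot, act)] += alpha * (cost - q[(hot, act)])  # incremental Q update
    return q

def policy(q, hot):
    """Greedy rescheduling decision for the current resource state."""
    return min(NODES, key=lambda n: q[(hot, n)])

if __name__ == "__main__":
    q = train()
    for hot in NODES:
        print(f"hot={hot} -> place on {policy(q, hot)}")
```

After training, the learned policy steers the service away from whichever node is under pressure (e.g. when `edge-1` is overloaded it places on `edge-0`), which is the bandit-sized analogue of the adaptive rescheduling behavior the paper evaluates at application scale.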

📝 Abstract
Cloud computing, despite its advantages in scalability, may not always fully satisfy the low-latency demands of emerging latency-sensitive pervasive applications. The cloud-edge continuum addresses this by integrating the responsiveness of edge resources with cloud scalability. Microservice Architecture (MSA), characterized by modular, loosely coupled services, aligns effectively with this continuum. However, heterogeneous and dynamic computing resources pose significant challenges to the optimal placement of microservices. We propose REACH, a novel rescheduling algorithm that dynamically adapts microservice placement in real time using reinforcement learning to react to fluctuating resource availability and performance variations across distributed infrastructures. Extensive experiments on a real-world testbed demonstrate that REACH reduces average end-to-end latency by 7.9%, 10%, and 8% across three benchmark MSA applications, while effectively mitigating latency fluctuations and spikes.
Problem

Research questions and friction points this paper is trying to address.

Optimizing microservice placement in heterogeneous cloud-edge environments
Addressing latency fluctuations in dynamic distributed infrastructures
Adapting to changing resource availability for latency-sensitive applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning adapts microservice placement dynamically
Real-time rescheduling algorithm for cloud-edge continuum
Mitigates latency fluctuations across distributed infrastructures