A Conflict-Aware Resource Management Framework for the Computing Continuum

📅 2025-12-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address resource-scheduling conflicts arising from device heterogeneity and decentralization in the edge–fog–cloud continuum, this paper proposes an adaptive conflict-resolution framework based on deep reinforcement learning (DRL). It is presented as the first work to apply DRL to resource orchestration in the computing continuum, enabling dynamic conflict detection, online policy optimization, and cross-layer service coordination via real-time performance monitoring and state modeling. Unlike conventional static policies, the approach supports autonomous mediation grounded in historical states and real-time feedback, breaking persistent inter-agent conflict cycles. Evaluated on a Kubernetes testbed, the framework reduces the incidence of conflict cycles, improves resource-reallocation efficiency, and enhances system resilience and scalability, making it well suited to highly dynamic edge computing environments.

📝 Abstract
The increasing device heterogeneity and decentralization requirements in the computing continuum (i.e., spanning edge, fog, and cloud) introduce new challenges in resource orchestration. In such environments, agents are often responsible for optimizing resource usage across deployed services. However, agent decisions can lead to persistent conflict loops, inefficient resource utilization, and degraded service performance. To overcome such challenges, we propose a novel framework for adaptive conflict resolution in resource-oriented orchestration using a Deep Reinforcement Learning (DRL) approach. The framework enables handling resource conflicts across deployments and integrates a DRL model trained to mediate such conflicts based on real-time performance feedback and historical state information. The framework has been prototyped and validated on a Kubernetes-based testbed, illustrating its methodological feasibility and architectural resilience. Preliminary results show that the framework achieves efficient resource reallocation and adaptive learning in dynamic scenarios, thus providing a scalable and resilient solution for conflict-aware orchestration in the computing continuum.
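The abstract's central failure mode is a "persistent conflict loop": two orchestration agents repeatedly undo each other's allocations. The paper does not specify its detection mechanism, but a minimal sketch of the idea, assuming allocation states can be snapshotted and compared, is to watch the recent state history for a ping-pong pattern:

```python
from collections import deque

def detect_conflict_cycle(state_history, window=6):
    """Return True if the recent allocation states alternate with period 2,
    the classic sign of two agents repeatedly reverting each other."""
    recent = list(state_history)[-window:]
    if len(recent) < window:
        return False  # not enough observations to call it a cycle
    # A-B-A-B-... : every state equals the one two steps earlier,
    # and adjacent states differ.
    return all(recent[i] == recent[i + 2] for i in range(window - 2)) \
        and recent[0] != recent[1]

# Hypothetical usage: "A"/"B" stand for two allocation snapshots that the
# agents keep flipping between.
history = deque(maxlen=16)
for state in ["A", "B", "A", "B", "A", "B"]:
    history.append(state)
print(detect_conflict_cycle(history))  # → True
```

Longer cycle periods could be handled the same way by checking `recent[i] == recent[i + p]` for each candidate period `p`; the window size trades detection latency against false positives.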
Problem

Research questions and friction points this paper is trying to address.

Addresses persistent conflict loops in resource orchestration
Optimizes resource usage across heterogeneous computing environments
Mediates conflicts using deep reinforcement learning and real-time feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep Reinforcement Learning for conflict resolution
Real-time feedback and historical data integration
Kubernetes-based scalable resilient orchestration framework
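The summary does not detail the paper's DRL architecture, so as a stand-in, a tabular Q-learning sketch over a hypothetical two-state conflict model illustrates the mediation idea: the mediator learns from reward feedback to intervene when a cycle is detected rather than letting agents continue. State names, actions, and rewards below are illustrative assumptions, not taken from the paper.

```python
import random

# Toy mediation problem: the mediator sees whether a conflict cycle is
# active and chooses to let agents act freely, throttle one agent, or
# reassign the contested resource.
STATES = ["no_conflict", "cycle_detected"]
ACTIONS = ["allow", "throttle_agent", "reassign_resource"]

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def env_step(state, action):
    """Toy environment: breaking a detected cycle is rewarded; ignoring it
    lets the cycle persist and is penalized."""
    if state == "cycle_detected":
        if action == "allow":
            return "cycle_detected", -1.0
        return "no_conflict", 1.0
    # Conflicts occasionally arise on their own.
    return ("cycle_detected", -0.5) if random.random() < 0.3 else (state, 0.1)

random.seed(0)
state = "no_conflict"
for _ in range(2000):
    # Epsilon-greedy action selection.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, reward = env_step(state, action)
    # Standard Q-learning update.
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

# After training, the mediator prefers an intervention over "allow"
# whenever a cycle is detected.
print(max(ACTIONS, key=lambda a: Q[("cycle_detected", a)]))
```

The paper's framework replaces this lookup table with a deep network fed by real-time performance metrics and historical state, and the toy environment with feedback from a live Kubernetes deployment, but the mediate-from-reward loop is the same shape.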