🤖 AI Summary
To address the high deployment overhead and slow startup latency of containers in bandwidth-constrained edge computing environments, this paper proposes a layer-aware, resource-adaptive dual-weight scheduling mechanism. The method integrates container image layer sharing characteristics with real-time node resource load, establishing a scheduling framework based on image layer metadata analysis, multi-dimensional node scoring, and dynamically weighted load balancing, embedded directly in the Kubernetes scheduler. Experimental evaluation demonstrates that, compared to the default Kubernetes scheduler, the approach reduces container image download volume by 37.2%, decreases average container startup time by 29.5%, and improves the image layer sharing rate by 3.8×. This work is the first to jointly optimize image layer sharing benefits and real-time resource states, significantly enhancing container deployment efficiency and resource utilization in edge computing scenarios.
📝 Abstract
Lightweight containers provide an efficient approach for deploying computation-intensive applications at the network edge. The layered storage structure of container images can further reduce deployment cost and container startup time. Existing research discusses layer-sharing scheduling theoretically but pays little attention to practical implementation. To fill this gap, we propose and implement a Layer-aware and Resource-adaptive container Scheduler (LRScheduler) for edge computing. Specifically, we first utilize container image layer information to design and implement a node scoring and container scheduling mechanism. This mechanism can effectively reduce the download cost when deploying containers, which is critical in edge computing with limited bandwidth. Then, we design a dynamically weighted, resource-adaptive mechanism to enhance load balancing in edge clusters, increasing layer sharing scores when resource load is low so that idle resources are used effectively. Our scheduler is built on the scheduling framework of Kubernetes, enabling full process automation from task information acquisition to container deployment. Testing on a real system has shown that our design can effectively reduce container deployment cost compared with the default scheduler.
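To make the dual-weight idea concrete, the following is a minimal sketch of how such a node score could be computed: a layer-sharing term (fraction of the image's layer bytes already cached on the node) is combined with a resource term, and the layer-sharing weight is raised when the node's load is low. All function names, weight values, and the specific weight-adaptation rule here are illustrative assumptions, not the paper's published formulas.

```python
# Hypothetical sketch of a layer-aware, resource-adaptive node score.
# Weights and the adaptation rule below are assumed for illustration;
# the actual LRScheduler formulas are defined in the paper itself.

def layer_share_score(image_layers, node_cached_layers):
    """Fraction of the image's layer bytes already cached on the node.

    image_layers: dict mapping layer digest -> layer size in bytes.
    node_cached_layers: set of layer digests present on the node.
    """
    total = sum(image_layers.values())
    shared = sum(size for digest, size in image_layers.items()
                 if digest in node_cached_layers)
    return shared / total if total else 0.0

def resource_score(cpu_load, mem_load):
    """Higher score for lighter-loaded nodes; loads are in [0, 1]."""
    return 1.0 - (cpu_load + mem_load) / 2.0

def node_score(image_layers, node_cached_layers, cpu_load, mem_load):
    """Dynamically weighted combination of layer sharing and load.

    When the node is lightly loaded (high resource_score), the layer-
    sharing weight grows, steering pods toward nodes that already hold
    the image's layers and have idle capacity (assumed linear rule).
    """
    r = resource_score(cpu_load, mem_load)
    w_layer = 0.3 + 0.4 * r        # assumed: sharing weight in [0.3, 0.7]
    w_res = 1.0 - w_layer
    return w_layer * layer_share_score(image_layers, node_cached_layers) \
        + w_res * r
```

In a Kubernetes setting, a mechanism like this would live in a Score plugin of the scheduling framework, with layer digests read from image manifests and node caches; the scheduler then places the pod on the highest-scoring feasible node.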