🤖 AI Summary
This work addresses the limitations of existing container autoscaling mechanisms, which are predominantly workload-centric and lack awareness of energy consumption and carbon emissions, thereby falling short of the efficiency demands of green cloud-edge computing. To bridge this gap, we propose the first AI-native, carbon-aware orchestration system that integrates multi-level telemetry data—from power distribution units to Kubernetes containers—into a unified observability layer. Leveraging a machine learning model that jointly captures workload, performance, and power characteristics, combined with a Model Predictive Control (MPC) strategy, our approach optimizes energy usage while meeting service latency requirements. Evaluated on a real-world production-grade platform, the proposed method reduces energy consumption by 34.68% compared to conventional Horizontal Pod Autoscaler (HPA) solutions, marking the first demonstration of full-stack energy-aware, AI-driven green scheduling.
📝 Abstract
Future networks must meet stringent requirements while operating within tight energy and carbon constraints. Current autoscaling mechanisms remain workload-centric and infrastructure-siloed, and are largely unaware of their environmental impact. We present NeuroScaler, an AI-native, energy-efficient, and carbon-aware orchestrator for green cloud and edge networks. NeuroScaler aggregates multi-tier telemetry, from Power Distribution Units (PDUs) through bare-metal servers to virtualized infrastructure with containers managed by Kubernetes, using distinct energy and computing metrics at each tier. It supports several machine learning pipelines that link load, performance, and power. Within this unified observability layer, a model-predictive control policy optimizes energy use while meeting service-level objectives. On a testbed of production-grade servers running real services, NeuroScaler reduces energy consumption by 34.68% compared to the Horizontal Pod Autoscaler (HPA) while maintaining target latency.
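To make the abstract's control loop concrete, here is a minimal sketch of an MPC-style, energy-aware replica planner. Everything below is an illustrative assumption: the latency and power models, their constants, and the function names are hypothetical stand-ins for NeuroScaler's learned models, not the paper's actual implementation.

```python
def predicted_latency_ms(load_rps: float, replicas: int) -> float:
    """Toy performance model (assumed): latency grows with per-replica load."""
    per_replica = load_rps / replicas
    return 20.0 + 0.5 * per_replica  # base latency + load-dependent term

def predicted_power_w(replicas: int, load_rps: float) -> float:
    """Toy power model (assumed): idle draw per replica plus a load term."""
    return replicas * 15.0 + 0.2 * load_rps

def plan_replicas(load_forecast, slo_ms=100.0, max_replicas=16):
    """Receding-horizon plan: at each step, pick the fewest replicas
    (and hence, under the toy power model, the lowest power) that still
    meet the latency SLO for the forecast load."""
    plan = []
    for load in load_forecast:
        for r in range(1, max_replicas + 1):
            if predicted_latency_ms(load, r) <= slo_ms:
                plan.append(r)
                break
        else:
            plan.append(max_replicas)  # SLO infeasible: saturate capacity
    return plan

if __name__ == "__main__":
    forecast = [80.0, 320.0, 640.0, 160.0]  # predicted requests/s per step
    print(plan_replicas(forecast))  # → [1, 2, 4, 1]
```

In contrast to HPA's reactive CPU-threshold rule, this kind of planner acts on a load forecast and an explicit power model, which is what allows it to trade idle capacity for energy savings while keeping latency within the SLO.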