🤖 AI Summary
This paper targets three key challenges in autonomous mobile robots (AMRs): the strong coupling between physical- and cyber-subsystem power consumption, the spatiotemporal locality of environment perception and navigation, and the difficulty of hardware-software co-optimization. To address them, it proposes pNav, the first millisecond-scale, hardware-software co-predictive dynamic power-management system for AMRs. pNav integrates the ROS navigation stack, 2D LiDAR and camera perception data, and DVFS-based hardware control to model the spatiotemporal locality of navigation behavior in real time, enabling dynamic optimization of both algorithmic strategies and hardware configurations. Experimental evaluation demonstrates a 38.1% reduction in total system power consumption and 96.2% accuracy in power prediction, while strictly preserving navigation accuracy and operational safety. The core contribution is the first fine-grained, low-latency, closed-loop cross-layer energy-efficiency co-optimization tailored specifically to AMR workloads.
📝 Abstract
This paper presents pNav, a novel power-management system that significantly enhances the power/energy efficiency of Autonomous Mobile Robots (AMRs) by jointly optimizing their physical/mechanical and cyber subsystems. By profiling AMRs' power consumption, we identify three challenges in achieving cyber-physical system (CPS) power efficiency that involve both cyber (C) and physical (P) subsystems: (1) variability in the system power-consumption breakdown, (2) environment-aware navigation locality, and (3) coordination of the C and P subsystems. pNav takes a multi-faceted approach to achieve power efficiency for AMRs. First, it integrates millisecond-level power-consumption prediction for both the C and P subsystems. Second, it includes novel real-time modeling and monitoring of spatial and temporal navigation localities for AMRs. Third, it supports dynamic coordination of AMR software (navigation, detection) and hardware (motors, DVFS driver) configurations. pNav is prototyped using the Robot Operating System (ROS) Navigation Stack, a 2D LiDAR, and a camera. Our in-depth evaluation with a real robot and Gazebo environments demonstrates a >96% accuracy in predicting power consumption and a 38.1% reduction in power consumption without compromising navigation accuracy or safety.
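To make the co-prediction idea concrete, here is a minimal sketch of the kind of closed loop the abstract describes: predict power for the cyber (C) and physical (P) subsystems, then pick the lowest DVFS frequency whose predicted compute latency still meets the navigation deadline. All models, constants, and function names below are illustrative assumptions, not pNav's actual models.

```python
# Hypothetical pNav-style loop step (illustrative only, not from the paper).
DVFS_LEVELS_GHZ = [0.6, 1.0, 1.4, 1.8]  # assumed candidate CPU frequencies

def predict_cyber_power(freq_ghz, cpu_util):
    """Toy model: dynamic power grows roughly linearly with f * utilization."""
    return 0.5 + 2.0 * freq_ghz * cpu_util  # watts (illustrative constants)

def predict_physical_power(linear_vel, angular_vel):
    """Toy motor model: power grows with commanded velocities."""
    return 3.0 + 8.0 * abs(linear_vel) + 2.0 * abs(angular_vel)

def perception_latency_ms(freq_ghz, workload_ms_at_1ghz):
    """Assume compute latency scales inversely with CPU frequency."""
    return workload_ms_at_1ghz / freq_ghz

def choose_dvfs(workload_ms_at_1ghz, deadline_ms, cpu_util):
    """Pick the lowest frequency meeting the deadline; return (freq, power)."""
    for f in DVFS_LEVELS_GHZ:
        if perception_latency_ms(f, workload_ms_at_1ghz) <= deadline_ms:
            return f, predict_cyber_power(f, cpu_util)
    f = DVFS_LEVELS_GHZ[-1]  # deadline unreachable: fall back to max frequency
    return f, predict_cyber_power(f, cpu_util)

# One millisecond-scale decision: a 50 ms (at 1 GHz) perception workload
# with a 40 ms deadline forces the 1.4 GHz level here.
freq, p_c = choose_dvfs(workload_ms_at_1ghz=50.0, deadline_ms=40.0, cpu_util=0.7)
p_p = predict_physical_power(linear_vel=0.5, angular_vel=0.2)
total_power = p_c + p_p  # joint C + P power estimate for this interval
```

In the real system this decision would run every few milliseconds and also adjust software-side knobs (e.g., detection rate), which this sketch omits.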