🤖 AI Summary
Fixed-frequency control in robotics compromises efficiency and robustness, lacking the dynamic adaptability observed in biological systems. To address this, we propose a time-adaptive control framework that jointly optimizes control actions and their execution durations via deep reinforcement learning, enabling autonomous, real-time adjustment of control frequency. Our method employs decoupled action and timing policy networks, trained exclusively in simulation, and achieves zero-shot transfer to physical RC cars and quadrupedal robots. Experiments demonstrate that the approach matches or exceeds the performance of fixed-frequency baselines while reducing average control frequency by 40–65%, significantly improving energy efficiency and environmental adaptability. Crucially, this work presents the first empirical validation of a learnable, transferable time-adaptive control paradigm on real-world, high-dynamic robotic platforms.
📝 Abstract
Fixed-frequency control in robotics imposes a trade-off between the efficiency of low-frequency control and the robustness of high-frequency control, a limitation not seen in adaptable biological systems. We address this with a reinforcement learning approach in which policies jointly select control actions and their application durations, enabling robots to autonomously modulate their control frequency in response to situational demands. We validate our method with zero-shot sim-to-real experiments on two distinct hardware platforms: a high-speed RC car and a quadrupedal robot. Our method matches or outperforms fixed-frequency baselines in terms of rewards while significantly reducing the control frequency and exhibiting adaptive frequency control under real-world conditions.