AI Summary
Addressing real-time, partially observable multi-robot decision-making in dynamic environments. Method: We propose a hybrid hierarchical architecture that tightly integrates model-free reinforcement learning (PPO/SAC) into the classical robotics stack via sub-behavior decomposition and heuristic scheduling, enabling end-to-end decision-making; this is augmented by multi-fidelity sim2real transfer (Gazebo to physical platform) and co-optimization of the motion planning and state estimation modules. Contribution/Results: Our work introduces the first tightly coupled integration mechanism between RL modules and conventional robot software architectures, alongside a lightweight generalization strategy (sub-behavior learning plus heuristic selection) that ensures millisecond-level latency while significantly improving robustness and environmental adaptability. The system secured first place in the 2024 RoboCup Standard Platform League Challenge Shield Division. Real-robot evaluations demonstrate high task success rates, low end-to-end latency, and deployment stability.
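The core idea of "sub-behavior learning + heuristic selection" can be sketched as a thin scheduling layer over a set of learned policies. The sketch below is illustrative only: the behavior names, observation fields, and threshold are hypothetical, not taken from the paper, and each policy stands in for an exported RL actor (e.g. a frozen PPO network) queried once per control tick.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical observation/action types; a real system would use structured
# state estimates (ball position, robot pose) and motion commands.
Observation = dict
Action = tuple

@dataclass
class SubBehavior:
    """Wraps one learned policy (e.g. a PPO actor exported for inference)."""
    name: str
    policy: Callable[[Observation], Action]

def heuristic_select(obs: Observation,
                     behaviors: Dict[str, SubBehavior]) -> SubBehavior:
    """Cheap rule-based scheduler: decides which learned sub-behavior runs
    this tick, so selection adds negligible latency on top of one policy
    forward pass. Names and the 0.5 m threshold are made up for illustration."""
    if obs.get("ball_distance", float("inf")) < 0.5:
        return behaviors["kick"]
    return behaviors["approach_ball"]

def step(obs: Observation, behaviors: Dict[str, SubBehavior]) -> Action:
    # Heuristic selection, then a single inference call on the chosen policy.
    return heuristic_select(obs, behaviors).policy(obs)
```

Keeping the selector rule-based rather than learned is what makes the decomposition lightweight: only small per-skill policies are evaluated online, which is consistent with the millisecond-level latency claim above.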
Abstract
Robot decision-making in partially observable, real-time, dynamic, and multi-agent environments remains a difficult and unsolved challenge. Model-free reinforcement learning (RL) is a promising approach to learning decision-making in such domains; however, end-to-end RL in complex environments is often intractable. To address this challenge in the RoboCup Standard Platform League (SPL) domain, we developed a novel architecture integrating RL within a classical robotics stack, while employing a multi-fidelity sim2real approach and decomposing behavior into learned sub-behaviors with heuristic selection. Our architecture led to victory in the 2024 RoboCup SPL Challenge Shield Division. In this work, we fully describe our system's architecture and empirically analyze key design decisions that contributed to its success. Our approach demonstrates how RL-based behaviors can be integrated into complete robot behavior architectures.