Dynamic object goal pushing with mobile manipulators through model-free constrained reinforcement learning

📅 2025-02-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses dynamic pushing of objects with unknown physical properties (mass, material, friction) using a mobile manipulator. We propose a model-free Constrained Reinforcement Learning (CRL) framework for end-to-end pushing control from only object pose observations. Methodologically, we design a hierarchical action space to coordinate the manipulator and mobile base, and introduce a pose-feedback-driven closed-loop policy enabling robust contact-rich interaction and anti-tipping adaptation—without object modeling or system identification. To our knowledge, this is the first application of CRL to dynamic pushing control, explicitly balancing positional accuracy and orientation robustness under physical constraints. In simulation, the method achieves a 91.35% success rate; on physical hardware, it attains ≥80% success across diverse unknown objects—including varying materials, masses, sizes, and shapes—demonstrating strong generalization and stability.
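The summary mentions a constrained RL formulation that trades off positional accuracy against orientation robustness. The paper's exact algorithm is not given here; a minimal sketch of the generic Lagrangian approach to constrained RL (dual gradient ascent on a cost multiplier, with all names and rates illustrative) looks like this:

```python
import numpy as np

def lagrangian_update(reward_grad, cost_grad, lam, avg_cost, cost_limit,
                      lr_policy=1e-3, lr_lambda=1e-2):
    """One dual-gradient step of a generic constrained-RL (Lagrangian) scheme.

    Illustrative only: the policy ascends L = J_reward - lam * (J_cost - limit),
    while the multiplier `lam` rises whenever the average constraint cost
    (e.g. an orientation-error or tipping penalty) exceeds its limit.
    """
    # Policy parameters move along grad(reward) - lam * grad(cost).
    policy_step = lr_policy * (reward_grad - lam * cost_grad)
    # Dual ascent on lam, projected back onto lam >= 0.
    lam_new = max(0.0, lam + lr_lambda * (avg_cost - cost_limit))
    return policy_step, lam_new
```

When the constraint is satisfied the multiplier decays toward zero and the update reduces to ordinary policy-gradient ascent on the reward.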

📝 Abstract
Non-prehensile pushing to move and reorient objects to a goal is a versatile loco-manipulation skill. In the real world, the object's physical properties and its friction with the floor carry significant uncertainty, which makes the task challenging for a mobile manipulator. In this paper, we develop a learning-based controller that moves an unknown object to a desired position and yaw orientation through a sequence of pushing actions. The proposed controller, which commands both the robotic arm and the mobile base, is trained using a constrained Reinforcement Learning (RL) formulation. We demonstrate its capability in experiments with a quadrupedal robot equipped with an arm. The learned policy achieves a success rate of 91.35% in simulation and at least 80% on hardware in challenging scenarios. Through extensive hardware experiments, we show that the approach is highly robust to unknown objects of different masses, materials, sizes, and shapes. It reactively discovers the pushing location and direction, achieving contact-rich behavior while observing only the pose of the object. Additionally, we demonstrate the adaptive behavior of the learned policy in preventing the object from toppling.
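The abstract describes a single policy that coordinates the arm and the mobile base. The paper's actual action space is not specified on this page; a hypothetical sketch of how a flat policy output could be split into scaled base and end-effector commands (all dimensions and limits assumed for illustration) might look like:

```python
import numpy as np

def split_hierarchical_action(action, v_base_max=0.5, w_base_max=1.0,
                              ee_step_max=0.05):
    """Map a flat policy output in [-1, 1]^5 onto base and arm commands.

    Illustrative decomposition only: we assume a planar base velocity plus
    yaw rate for the mobile base, and a planar end-effector displacement
    for the arm. Limits (m/s, rad/s, m) are placeholder values.
    """
    a = np.clip(np.asarray(action, dtype=float), -1.0, 1.0)
    base_cmd = {
        "vx": v_base_max * a[0],   # forward base velocity
        "vy": v_base_max * a[1],   # lateral base velocity
        "wz": w_base_max * a[2],   # base yaw rate
    }
    arm_cmd = {"dx": ee_step_max * a[3], "dy": ee_step_max * a[4]}
    return base_cmd, arm_cmd
```

Clipping before scaling keeps out-of-range policy outputs within the hardware limits of both subsystems.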
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
Object Manipulation
Uncertainty Handling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-free Reinforcement Learning
Adaptive Object Manipulation
Robustness to Uncertainty