Residual Reinforcement Learning for Waste-Container Lifting Using Large-Scale Cranes with Underactuated Tools

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses low trajectory tracking accuracy and insufficient swing suppression in large hydraulic cranes used for urban waste recycling, problems that arise from underactuated end-effectors and tight geometric tolerances. The authors propose a residual reinforcement learning framework that augments a nominal Cartesian controller with a residual policy trained via proximal policy optimization (PPO) to compensate for unmodeled dynamics and parametric uncertainties. By integrating prior control knowledge with data-driven learning, without requiring end-to-end training, the approach gains robustness from admittance control, swing-aware damping, and damped least-squares inverse kinematics, and gains generalization from domain randomization. Simulations in Isaac Lab show that the proposed method achieves significantly higher trajectory tracking accuracy, reduced oscillation, and markedly improved lifting success rates compared to the nominal controller alone.
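The residual composition described above amounts to adding a small, bounded learned correction on top of the nominal command. A minimal sketch, assuming the nominal Cartesian controller outputs an end-effector velocity command and the PPO policy outputs a bounded correction; the function names, observation handling, and residual scale are illustrative assumptions, not details from the paper:

```python
import numpy as np

RESIDUAL_SCALE = 0.1  # assumed bound on the learned correction (illustrative)

def composed_command(obs, nominal_controller, residual_policy):
    """Residual RL action composition: nominal command plus a small,
    clipped learned correction. All names here are hypothetical."""
    u_nominal = nominal_controller(obs)   # e.g. desired end-effector twist
    a_residual = residual_policy(obs)     # PPO action, nominally in [-1, 1]
    return u_nominal + RESIDUAL_SCALE * np.clip(a_residual, -1.0, 1.0)
```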

📝 Abstract
This paper studies the container-lifting phase of a waste-container recycling task in urban environments, performed by a hydraulic loader crane equipped with an underactuated discharge unit, and proposes a residual reinforcement learning (RRL) approach that combines a nominal Cartesian controller with a learned residual policy. All experiments are conducted in simulation, where the task is characterized by tight geometric tolerances between the discharge-unit hooks and the container rings relative to the overall crane scale, making precise trajectory tracking and swing suppression essential. The nominal controller uses admittance control for trajectory tracking and pendulum-aware swing damping, followed by damped least-squares inverse kinematics with a nullspace posture term to generate joint velocity commands. A residual policy, trained with PPO in Isaac Lab, compensates for unmodeled dynamics and parameter variations, improving precision and robustness without requiring end-to-end learning from scratch. We further employ randomized episode initialization and domain randomization over payload properties, actuator gains, and passive joint parameters to enhance generalization. Simulation results demonstrate improved tracking accuracy, reduced oscillations, and higher lifting success rates compared to the nominal controller alone.
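The damped least-squares inverse kinematics step with a nullspace posture term can be sketched as follows; the damping factor, posture gain, and rest posture are placeholder assumptions rather than values from the paper:

```python
import numpy as np

def dls_ik_with_nullspace(J, x_dot_cmd, q, q_rest, damping=0.05, k_posture=1.0):
    """Map a Cartesian velocity command to joint velocities via damped
    least squares, with a rest-posture task projected into the nullspace.
    J: (m, n) end-effector Jacobian; x_dot_cmd: (m,) commanded twist;
    q, q_rest: (n,) current and rest joint positions. Gains are illustrative."""
    m, n = J.shape
    # Damped pseudoinverse: J^T (J J^T + lambda^2 I)^{-1}
    J_pinv = J.T @ np.linalg.inv(J @ J.T + damping**2 * np.eye(m))
    q_dot_task = J_pinv @ x_dot_cmd              # primary tracking task
    q_dot_posture = k_posture * (q_rest - q)     # secondary posture task
    N = np.eye(n) - J_pinv @ J                   # nullspace projector
    return q_dot_task + N @ q_dot_posture
```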
Problem

Research questions and friction points this paper is trying to address.

waste-container lifting
underactuated tools
large-scale cranes
trajectory tracking
swing suppression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Residual Reinforcement Learning
Underactuated Crane Control
Admittance Control
Domain Randomization (see the sketch after this list)
PPO
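A minimal sketch of the per-episode domain randomization over the quantities the abstract lists (payload properties, actuator gains, passive joint parameters); every attribute name and range below is an assumption for illustration, not a value from the paper:

```python
import numpy as np

rng = np.random.default_rng()

def randomize_episode(sim):
    """Perturb simulator parameters at episode reset (hypothetical API)."""
    sim.payload_mass = rng.uniform(0.7, 1.3) * sim.nominal_payload_mass
    sim.actuator_gain = rng.uniform(0.8, 1.2) * sim.nominal_actuator_gain
    sim.passive_joint_damping = rng.uniform(0.5, 1.5) * sim.nominal_joint_damping
    # Randomized episode initialization near the nominal start configuration.
    sim.q0 = sim.q_nominal + rng.normal(0.0, 0.05, size=sim.q_nominal.shape)
```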
Qi Li
Robotics Research Lab, RPTU University of Kaiserslautern-Landau, Kaiserslautern, Germany
Karsten Berns
Professor of Computer Science, University of Kaiserslautern
Robotics