KARL: Kalman-Filter Assisted Reinforcement Learner for Dynamic Object Tracking and Grasping

📅 2025-06-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the insufficient robustness of eye-on-hand (EoH) robotic systems in tracking and grasping dynamic objects under real-world challenges such as occlusion and rapid target motion, this paper proposes a perception–control closed-loop framework. First, a six-stage progressive reinforcement learning curriculum is designed to enhance policy generalization. Second, a robust Kalman filtering layer is embedded between visual perception and motion control to maintain continuous, uncertainty-aware 6D pose estimation during transient target loss. Third, a failure-driven dynamic trajectory replanning mechanism with graceful retries is introduced. Integrating deep reinforcement learning, visual servoing, and adaptive filtering, the method is validated in both simulation and real-world experiments. Results show a 32% improvement in grasping success rate, a 27% increase in execution speed, a doubling of the operational workspace, and reliable recovery of tracking and grasping after up to 1.2 seconds of target occlusion.
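To give a feel for how a Kalman filter can bridge transient target loss, the sketch below runs a constant-velocity filter on 3D position only: it fuses measurements while the target is visible, then coasts on predict-only steps during a simulated occlusion. This is an illustration of the general technique, not KARL's actual filter, which the paper describes as a robust layer over full 6D pose; the model dimensions and noise parameters here are assumptions for demonstration.

```python
import numpy as np

class ConstantVelocityKF:
    """Toy constant-velocity Kalman filter over 3D position (not the paper's 6D-pose filter)."""

    def __init__(self, dt=0.05, q=1e-3, r=1e-2):
        self.x = np.zeros(6)                               # state: [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)                                 # state covariance
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                    # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only
        self.Q = q * np.eye(6)                             # process noise
        self.R = r * np.eye(3)                             # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                                  # predicted position

    def update(self, z):
        y = z - self.H @ self.x                            # innovation
        S = self.H @ self.P @ self.H.T + self.R            # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

kf = ConstantVelocityKF()
dt = 0.05
for t in range(20):                                        # target visible: moves at +0.1 m/s in x
    kf.predict()
    kf.update(np.array([0.1 * dt * (t + 1), 0.0, 0.0]))
for _ in range(10):                                        # occlusion: predict-only coasting
    est = kf.predict()
```

During the predict-only phase the filter keeps extrapolating along the estimated velocity while its covariance `P` grows, which is exactly the "uncertain but continuous" pose estimate the abstract describes: the downstream controller still has a target, and knows how much to trust it.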

📝 Abstract
We present Kalman-filter Assisted Reinforcement Learner (KARL) for dynamic object tracking and grasping over eye-on-hand (EoH) systems, significantly expanding such systems' capabilities in challenging, realistic environments. In comparison to the previous state-of-the-art, KARL (1) incorporates a novel six-stage RL curriculum that doubles the system's motion range, thereby greatly enhancing the system's grasping performance, (2) integrates a robust Kalman filter layer between the perception and reinforcement learning (RL) control modules, enabling the system to maintain an uncertain but continuous 6D pose estimate even when the target object temporarily exits the camera's field of view or undergoes rapid, unpredictable motion, and (3) introduces mechanisms that allow retries to gracefully recover from unavoidable policy execution failures. Extensive evaluations in both simulation and real-world experiments qualitatively and quantitatively corroborate KARL's advantage over earlier systems, achieving higher grasp success rates and faster robot execution speed. Source code and supplementary materials for KARL will be made available at: https://github.com/arc-l/karl.
Problem

Research questions and friction points this paper is trying to address.

Enhances dynamic object tracking and grasping in EoH systems
Improves 6D pose estimation during object occlusion or fast motion
Enables recovery from policy failures in robotic grasping tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Six-stage RL curriculum doubles motion range
Kalman filter ensures continuous 6D pose estimation
Retry mechanisms recover from policy failures
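The retry innovation above can be sketched as a plan-execute-recover loop: on each detected failure, the system backs off, re-observes, and replans rather than aborting. Everything below is a hypothetical stand-in for demonstration; `FlakyGraspEnv` and its method names are invented and not the paper's API.

```python
class FlakyGraspEnv:
    """Toy environment whose first two grasp attempts fail (e.g. the target slips away)."""

    def __init__(self, failures_before_success=2):
        self.failures_left = failures_before_success
        self.retreats = 0

    def execute_grasp(self, plan):
        if self.failures_left > 0:
            self.failures_left -= 1
            return False                       # grasp failed mid-execution
        return True

    def retreat_and_reobserve(self):
        self.retreats += 1                     # back off to a pre-grasp pose, re-detect target

def grasp_with_retries(env, max_attempts=4):
    """Failure-driven loop: replan and retry after each failure instead of aborting."""
    for attempt in range(1, max_attempts + 1):
        plan = f"trajectory-{attempt}"         # placeholder for a freshly replanned trajectory
        if env.execute_grasp(plan):
            return attempt                     # success on this attempt
        env.retreat_and_reobserve()            # graceful recovery before the next try
    return 0                                   # exhausted the retry budget

env = FlakyGraspEnv()
attempts_used = grasp_with_retries(env)        # succeeds on the third attempt
```

The key design point is that failure handling is part of the control loop itself: each retry starts from a re-observed state, so a transient failure does not end the episode.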