🤖 AI Summary
To address insufficient robustness in robotic manipulation arising from uncertainties in contact dynamics and object geometry, this paper proposes a co-optimization framework for robotic morphology and control policies, with caging serving as both a geometric constraint and task objective. The method introduces a hierarchical coupled architecture: a lower layer employs reinforcement learning to optimize motion policies, while an upper layer conducts multi-task Bayesian optimization to search for optimal manipulator morphologies. Crucially, we propose the “minimum escape energy” as a novel cross-layer robustness metric, enabling end-to-end, caging-driven joint optimization. The approach integrates geometry-aware caging verification with an energy-based robustness quantification. Evaluated under four distinct perturbation scenarios, the method achieves significantly improved manipulation success rates, empirically demonstrating that morphology-policy co-optimization substantially enhances robustness in uncertain environments.
📝 Abstract
Uncertainties in contact dynamics and object geometry remain significant barriers to robust robotic manipulation. Caging mitigates these uncertainties by constraining an object's mobility without requiring precise contact modeling. However, existing caging research has largely treated morphology and policy optimization as separate problems, overlooking their inherent synergy. In this paper, we introduce CageCoOpt, a hierarchical framework that jointly optimizes manipulator morphology and control policy for robust manipulation. The framework employs reinforcement learning for policy optimization at the lower level and multi-task Bayesian optimization for morphology optimization at the upper level. A robustness metric in caging, Minimum Escape Energy, is incorporated into the objectives of both levels to promote caging configurations and enhance manipulation robustness. The evaluation results through four manipulation tasks demonstrate that co-optimizing morphology and policy improves success rates under uncertainties, establishing caging-guided co-optimization as a viable approach for robust manipulation.