🤖 AI Summary
Visuomotor policies commonly neglect adaptive compliance modulation, forcing a poor trade-off between suppressing contact forces and tracking trajectories accurately. Method: We propose the Adaptive Compliance Policy (ACP) framework, which learns spatiotemporal compliance configurations directly from human demonstrations, replacing fixed or pre-specified stiffness assumptions with task-driven, online compliance adaptation. ACP integrates diffusion-model-guided policy learning, demonstration-based compliance estimation, and joint visual–tactile representation modeling. Contribution/Results: Evaluated on contact-intensive manipulation tasks, ACP achieves over 50% performance improvement over state-of-the-art methods, significantly enhancing robotic robustness in uncertain, unstructured environments through compliant, force-aware interaction.
📝 Abstract
Compliance plays a crucial role in manipulation, as it balances the concurrent control of position and force under uncertainty. Yet compliance is often overlooked by today's visuomotor policies, which focus solely on position control. This paper introduces Adaptive Compliance Policy (ACP), a novel framework that learns to dynamically adjust system compliance both spatially and temporally for a given manipulation task from human demonstrations, improving upon previous approaches that rely on pre-selected compliance parameters or assume uniform constant stiffness. However, computing full compliance parameters from human demonstrations is an ill-defined problem. Instead, we estimate an approximate compliance profile with two useful properties: avoiding large contact forces and encouraging accurate tracking. Our approach enables robots to handle complex contact-rich manipulation tasks and achieves over 50% performance improvement compared to state-of-the-art visuomotor policy methods. For result videos, see https://adaptive-compliance.github.io/
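The trade-off between force suppression and tracking accuracy can be illustrated with a standard Cartesian impedance law. The sketch below is a minimal 1-DoF example with hypothetical gains (`k`, `d`) and is not the paper's actual controller; it only shows why modulating stiffness online matters: under the same position error, a stiff setting produces a large (accurate-tracking) restoring force, while a compliant setting bounds the contact force.

```python
import numpy as np

def impedance_force(k, d, x, v, x_ref, v_ref=0.0):
    """Impedance law: F = k * (x_ref - x) + d * (v_ref - v).

    High stiffness k -> strong restoring force, accurate tracking.
    Low stiffness k -> compliant behavior, limited contact force
    when the reference penetrates a surface.
    """
    return k * (x_ref - x) + d * (v_ref - v)

# Same 2 cm position error, two stiffness settings (hypothetical values):
err = 0.02  # meters
f_stiff = impedance_force(k=1000.0, d=50.0, x=0.0, v=0.0, x_ref=err)  # 20 N
f_soft = impedance_force(k=100.0, d=20.0, x=0.0, v=0.0, x_ref=err)    # 2 N
```

A policy that fixes `k` must commit to one point on this trade-off for the whole task; ACP instead learns to predict the compliance setting per time step and spatial direction from demonstrations.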