Lightweight Learning from Actuation-Space Demonstrations via Flow Matching for Whole-Body Soft Robotic Grasping

📅 2025-11-03
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Soft-bodied robots face challenges in whole-body grasping under uncertainty and rich contact, which typically demands dense sensing and complex closed-loop feedback control. To address this, we propose a lightweight actuation-space learning framework. The approach introduces Rectified Flow, a flow-matching generative model, into actuation-space modeling, enabling direct learning of distributional, probabilistic control representations from only 30 deterministic demonstrations, without explicit dynamics models or real-time feedback. By treating the inherent passive adaptability of soft structures as functional control intelligence, the method significantly reduces the computational load on the central controller. Experiments demonstrate a 97.5% grasping success rate and strong generalization to ±33% object-size variations and 20%-200% execution-time scaling. The framework achieves end-to-end control with minimal samples, low sensory requirements, and high robustness.
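As a concrete illustration of the flow-matching idea described above, the sketch below trains a toy Rectified Flow in actuation space: straight-line interpolants between Gaussian noise and demonstrated actuation commands define a constant target velocity, and sampling Euler-integrates the learned velocity field from noise to a command. Everything here is a hypothetical stand-in (30 random "demonstrations", 4 actuators, a linear least-squares velocity model in place of the neural network a real implementation would use).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for 30 actuation-space demonstrations:
# each row is one demonstrated actuation command (e.g. chamber pressures).
demos = rng.uniform(0.2, 0.8, size=(30, 4))

def make_batch(n):
    """Rectified-flow training pairs: straight-line interpolants and target velocities."""
    x1 = demos[rng.integers(0, len(demos), n)]   # data endpoint (a demo command)
    x0 = rng.standard_normal(x1.shape)           # noise endpoint
    t = rng.uniform(0.0, 1.0, size=(n, 1))       # interpolation time in [0, 1]
    xt = (1.0 - t) * x0 + t * x1                 # linear interpolant x_t
    v_target = x1 - x0                           # constant target velocity along the line
    return xt, t, v_target

# Fit a linear velocity model v(x_t, t) ≈ [x_t, t, 1] @ W by least squares
# (a real implementation would regress with a neural network instead).
xt, t, v = make_batch(20000)
feats = np.hstack([xt, t, np.ones((len(xt), 1))])
W, *_ = np.linalg.lstsq(feats, v, rcond=None)

def sample(n_steps=50):
    """Euler-integrate the learned ODE from noise (t=0) to an actuation command (t=1)."""
    x = rng.standard_normal((1, demos.shape[1]))
    dt = 1.0 / n_steps
    for k in range(n_steps):
        tk = np.full((1, 1), k * dt)
        f = np.hstack([x, tk, np.ones((1, 1))])
        x = x + dt * (f @ W)                     # follow the learned velocity field
    return x

cmd = sample()  # one generated actuation command, shape (1, 4)
```

The straight-line interpolant and constant target velocity are what distinguish Rectified Flow from score-based diffusion: the sampler is a short deterministic ODE solve rather than a long stochastic denoising chain, which fits the paper's low-compute, open-loop setting.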

๐Ÿ“ Abstract
Robotic grasping under uncertainty remains a fundamental challenge due to its contact-rich nature. Traditional rigid robotic hands, with limited degrees of freedom and compliance, rely on complex model-based and heavy feedback controllers to manage such interactions. Soft robots, by contrast, exhibit embodied mechanical intelligence: their underactuated structures and whole-body passive flexibility naturally accommodate uncertain contacts and enable adaptive behaviors. To harness this capability, we propose a lightweight actuation-space learning framework that infers distributional control representations for whole-body soft robotic grasping directly from deterministic demonstrations using a flow matching model (Rectified Flow), without requiring dense sensing or heavy control loops. Using only 30 demonstrations (less than 8% of the reachable workspace), the learned policy achieves a 97.5% grasp success rate across the whole workspace, generalizes to grasped-object size variations of ±33%, and maintains stable performance when the robot's dynamic response is directly adjusted by scaling the execution time from 20% to 200%. These results demonstrate that actuation-space learning, by leveraging the body's passive redundant DOFs and flexibility, converts the robot's mechanics into functional control intelligence and substantially reduces the burden on central controllers for this uncertainty-rich task.
Problem

Research questions and friction points this paper is trying to address.

Learning actuation-space control for soft robotic grasping
Addressing uncertainty in contact-rich grasping tasks
Reducing controller burden via passive mechanical intelligence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Flow matching model learns from actuation-space demonstrations
Lightweight framework reduces need for dense sensing
Converts robot body mechanics into control intelligence
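The execution-time robustness reported above (20% to 200% of nominal duration) amounts to replaying the same open-loop actuation profile on a stretched or compressed clock, with the soft body's passive compliance absorbing the changed dynamics. A minimal sketch of that time scaling, with all trajectory values and rates hypothetical:

```python
import numpy as np

# Illustrative open-loop actuation trajectory from one demonstration:
# normalized time stamps and chamber pressures (hypothetical values).
t_demo = np.linspace(0.0, 1.0, 6)
p_demo = np.array([0.0, 0.3, 0.7, 0.9, 0.9, 0.8])

def replay(scale, ctrl_hz=50, nominal_duration=2.0):
    """Resample the demo profile onto a fixed-rate clock whose total
    duration is scaled by `scale` (0.2 = 5x faster, 2.0 = 2x slower)."""
    duration = nominal_duration * scale
    n = int(duration * ctrl_hz) + 1
    t_norm = np.linspace(0.0, 1.0, n)          # normalized progress 0..1
    return np.interp(t_norm, t_demo, p_demo)   # same profile, new schedule

fast = replay(0.2)   # 20% of nominal execution time
slow = replay(2.0)   # 200% of nominal execution time
print(len(fast), len(slow))  # prints: 21 201
```

The command shape is identical in both cases; only the wall-clock schedule changes, which is why no replanning or feedback is needed when the execution speed is varied.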