FeasibleCap: Real-Time Embodiment Constraint Guidance for In-the-Wild Robot Demonstration Collection

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of collecting feasible demonstration trajectories during handheld gripper data acquisition without robot hardware, where real-time assessment of trajectory executability on the target robot is typically unavailable, leading to an abundance of invalid demonstrations. To this end, the authors propose FeasibleCap, a system that, for the first time, integrates real-time reachability analysis, joint velocity limits, and collision checking based on the target robot's kinematic model into a lightweight handheld capture paradigm. Through on-device visual overlays and haptic feedback, FeasibleCap guides users to correct infeasible motions immediately, enabling closed-loop guidance without reliance on learned models or head-mounted displays. Experiments show that the approach significantly improves replay success rates and reduces the proportion of infeasible frames in pick-and-place and throwing tasks, with particularly pronounced gains on throwing, while maintaining strong cross-platform transferability.
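The summary describes a per-frame feasibility gate with three checks: reachability, joint velocity limits, and collisions. A minimal sketch of what such a gate could look like is below; `robot` and its methods `solve_ik` and `in_collision` are hypothetical placeholders standing in for the target robot's kinematic model and collision checker, not the authors' actual interfaces.

```python
# Hypothetical sketch of a per-frame executability gate; `robot` and its
# methods (solve_ik, in_collision) are illustrative placeholders, not the
# authors' released API.
import numpy as np

def check_frame(robot, ee_pose, q_prev, dt, vel_limits):
    """Classify one captured gripper pose against the target robot model.

    Returns (feasible, reason, q) so a UI layer can map `reason`
    to a visual overlay or haptic cue.
    """
    # 1. Reachability: can the arm reach the captured end-effector pose?
    q = robot.solve_ik(ee_pose, seed=q_prev)
    if q is None:
        return False, "unreachable", None

    # 2. Joint-rate limits: finite-difference velocity between frames.
    qdot = (q - q_prev) / dt
    if np.any(np.abs(qdot) > vel_limits):
        return False, "joint_velocity_exceeded", q

    # 3. Collision: self- and environment collision at configuration q.
    if robot.in_collision(q):
        return False, "collision", q

    return True, "ok", q
```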

📝 Abstract
Gripper-in-hand data collection decouples demonstration acquisition from robot hardware, but whether a trajectory is executable on the target robot remains unknown until a separate replay-and-validate stage. Failed demonstrations therefore inflate the effective cost per usable trajectory through repeated collection, diagnosis, and validation. Existing collection-time feedback systems mitigate this issue but rely on head-worn AR/VR displays, robot-in-the-loop hardware, or learned dynamics models; real-time executability feedback has not yet been integrated into the gripper-in-hand data collection paradigm. We present FeasibleCap, a gripper-in-hand data collection system that brings real-time executability guidance into robot-free capture. At each frame, FeasibleCap checks reachability, joint-rate limits, and collisions against a target robot model and closes the loop through on-device visual overlays and haptic cues, allowing demonstrators to correct motions during collection without learned models, headsets, or robot hardware. On pick-and-place and tossing tasks, FeasibleCap improves replay success and reduces the fraction of infeasible frames, with the largest gains on tossing. Simulation experiments further indicate that enforcing executability constraints during collection does not sacrifice cross-embodiment transfer across robot platforms. Hardware designs and software are available at https://github.com/aod321/FeasibleCap.
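The abstract emphasizes closing the loop through on-device visual overlays and haptic cues. A hedged sketch of how a capture loop might wire the per-frame gate above to those cues is shown below; `tracker`, `display`, and `haptics` are illustrative stand-ins for the handheld device's pose tracker, screen, and vibration motor, and are not taken from the released code.

```python
# Hypothetical closed-loop capture sketch building on check_frame above;
# tracker, display, and haptics are illustrative stand-ins for the
# handheld device's pose tracker, on-device screen, and vibration motor.
def capture_loop(robot, tracker, display, haptics, dt, vel_limits):
    trajectory = []
    q_prev = robot.home_configuration()
    while tracker.is_recording():
        ee_pose = tracker.read_pose()          # current gripper pose
        feasible, reason, q = check_frame(
            robot, ee_pose, q_prev, dt, vel_limits)
        if feasible:
            display.show_overlay("green")      # pose is executable
            trajectory.append((ee_pose, q))
            q_prev = q
        else:
            display.show_overlay("red", label=reason)
            haptics.pulse()                    # prompt immediate correction
            # infeasible frames are flagged rather than silently recorded
            trajectory.append((ee_pose, None))
    return trajectory
```

The design point the abstract stresses is that every check runs against the robot model alone, so the loop needs no robot hardware, headset, or learned dynamics model at collection time.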
Problem

Research questions and friction points this paper is trying to address.

robot demonstration
executability
embodiment constraints
data collection
feasibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

real-time executability feedback
gripper-in-hand data collection
embodiment constraint
robot-free demonstration
haptic guidance
Zi Yin
Tsinghua University, Beijing, China
Fanhong Li
Tsinghua University, Beijing, China
Yun Gui
Tsinghua University, Beijing, China
Jia Liu
Tsinghua University
Object Recognition, Face Recognition, Perceptual Learning, fMRI