🤖 AI Summary
To address the challenges of fine-grained force control and unintuitive human–robot interaction in robotic dexterous manipulation, this paper proposes a semantics-driven bilateral force modulation framework. Methodologically, we introduce the first joint modeling of natural language instructions (e.g., “gently grasp the cup”) with bilateral teleoperation force/motion signals via a multimodal Transformer architecture, integrating the SigLIP language encoder, action tokenization, and fused perception of joint position, velocity, and torque. Our key contributions include: (i) the first end-to-end mapping from linguistic intent to force-level control, enabling real-time, interpretable, and bimanual force modulation; and (ii) empirical validation on unimanual cup-stacking and bimanual sponge-twisting tasks, where multi-level force instructions are accurately reproduced. Among the tested language encoders, SigLIP yields the best language–force alignment, demonstrating the efficacy of semantics-guided imitation learning for force control.
📝 Abstract
We present Bi-LAT, a novel imitation learning framework that unifies bilateral control with natural language processing to achieve precise force modulation in robotic manipulation. Bi-LAT leverages joint position, velocity, and torque data from leader-follower teleoperation while also integrating visual and linguistic cues to dynamically adjust applied force. By encoding human instructions such as "softly grasp the cup" or "strongly twist the sponge" through a multimodal Transformer-based model, Bi-LAT learns to distinguish nuanced force requirements in real-world tasks. We demonstrate Bi-LAT's performance in (1) a unimanual cup-stacking scenario where the robot accurately modulates grasp force based on language commands, and (2) a bimanual sponge-twisting task that requires coordinated force control. Experimental results show that Bi-LAT effectively reproduces the instructed force levels, particularly when incorporating SigLIP among the tested language encoders. Our findings demonstrate the potential of integrating natural language cues into imitation learning, paving the way for more intuitive and adaptive human-robot interaction. For additional material, please visit: https://mertcookimg.github.io/bi-lat/
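To make the described data flow concrete, here is a minimal toy sketch of language-conditioned force modulation. It is not the paper's implementation: a keyword lookup stands in for the SigLIP text encoder, and a hand-written proportional blend stands in for the multimodal Transformer policy; all names, weights, and force values below are hypothetical.

```python
from dataclasses import dataclass

# Toy "language encoder": maps force adjectives to a scale in [0, 1].
# A real system would embed the full instruction with a pretrained
# text encoder (e.g., SigLIP) rather than match keywords.
FORCE_WORDS = {"softly": 0.3, "gently": 0.3, "firmly": 0.7, "strongly": 1.0}


@dataclass
class ProprioState:
    """Proprioceptive signals collected via bilateral leader-follower teleoperation."""
    position: float  # joint angle [rad]
    velocity: float  # joint velocity [rad/s]
    torque: float    # measured joint torque [N*m]


def force_scale(instruction: str) -> float:
    """Extract the instructed force level from the command text."""
    for word in instruction.lower().split():
        if word in FORCE_WORDS:
            return FORCE_WORDS[word]
    return 0.5  # default to a mid-level force when unspecified


def target_torque(instruction: str, state: ProprioState, tau_max: float = 2.0) -> float:
    """Fuse the linguistic force level with proprioception into a commanded torque.

    The velocity term is a crude damping stand-in for what a learned
    policy would infer from fused position/velocity/torque tokens.
    """
    return force_scale(instruction) * tau_max - 0.1 * state.velocity


s = ProprioState(position=0.4, velocity=0.0, torque=0.2)
print(target_torque("softly grasp the cup", s))      # 0.3 * 2.0 = 0.6
print(target_torque("strongly twist the sponge", s))  # 1.0 * 2.0 = 2.0
```

The point of the sketch is the interface, not the arithmetic: the same proprioceptive state yields different commanded torques depending solely on the instruction, which is the behavior Bi-LAT learns end-to-end.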