UniForce: A Unified Latent Force Model for Robot Manipulation with Diverse Tactile Sensors

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of achieving universal force perception across heterogeneous tactile sensors, which differ significantly in sensing mechanism and physical morphology. To address this, the authors propose UniForce, a framework that jointly models inverse (image-to-force) and forward (force-to-image) dynamics and constructs a shared latent force space across sensors under static force-equilibrium constraints. This approach enables cross-device alignment without requiring external force/torque sensors. Notably, UniForce achieves zero-shot force-perception transfer among multimodal tactile sensors for the first time, supporting plug-and-play deployment in downstream tasks and facilitating coordinated vision–touch–language–action manipulation. Experiments demonstrate substantial improvements in force-estimation accuracy across diverse platforms, including GelSight, TacTip, and uSkin, and successful application to a robot wiping task requiring cross-sensor coordination.
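
Schematically, the joint inverse/forward modeling amounts to two heads sharing one latent code per tactile image. Below is a minimal PyTorch-style sketch; module names, layer dimensions, and the 3-axis force output are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class UniForceSketch(nn.Module):
    """Sketch of a shared latent force space with an inverse
    (image -> force) head and a forward (force -> image) head.
    All architectural choices here are illustrative assumptions."""

    def __init__(self, latent_dim: int = 128):
        super().__init__()
        # Sensor-specific encoder: tactile image -> shared latent force code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Inverse dynamics head: latent code -> 3-axis contact force.
        self.force_head = nn.Linear(latent_dim, 3)
        # Forward dynamics head: latent code + force -> reconstructed image
        # (assumes 32x32 tactile images for the shapes to line up).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 3, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, tactile_img: torch.Tensor):
        z = self.encoder(tactile_img)                         # shared latent
        force = self.force_head(z)                            # inverse dynamics
        recon = self.decoder(torch.cat([z, force], dim=1))    # forward dynamics
        return force, recon
```

Because every sensor's encoder maps into the same latent space, a downstream policy can consume the latent (or predicted force) regardless of which sensor produced the image, which is what makes the zero-shot plug-and-play claim plausible.
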

📝 Abstract
Force sensing is essential for dexterous robot manipulation, but scaling force-aware policy learning is hindered by the heterogeneity of tactile sensors. Differences in sensing principles (e.g., optical vs. magnetic), form factors, and materials typically require sensor-specific data collection, calibration, and model training, thereby limiting generalisability. We propose UniForce, a novel unified tactile representation learning framework that learns a shared latent force space across diverse tactile sensors. UniForce reduces cross-sensor domain shift by jointly modeling inverse dynamics (image-to-force) and forward dynamics (force-to-image), constrained by force-equilibrium and image-reconstruction losses to produce force-grounded representations. To avoid reliance on expensive external force/torque (F/T) sensors, we exploit static equilibrium and collect force-paired data via direct sensor–object–sensor interactions, enabling cross-sensor alignment with contact force. The resulting universal tactile encoder can be plugged into downstream force-aware robot manipulation tasks with zero-shot transfer, without retraining or finetuning. Extensive experiments on heterogeneous tactile sensors, including GelSight, TacTip, and uSkin, demonstrate consistent improvements in force estimation over prior methods and effective cross-sensor coordination in Vision-Tactile-Language-Action (VTLA) models on a robotic wiping task. Code and datasets will be released.
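
The static-equilibrium idea in the abstract can be made concrete: when two tactile sensors press the same static object from opposite sides, their contact forces must cancel, so each sensor's force prediction supervises the other's without any external F/T sensor. A hedged sketch of such a training objective, with loss weights and the exact formulation as assumptions:

```python
import torch
import torch.nn.functional as F

def uniforce_losses(force_a, force_b, recon_a, img_a, recon_b, img_b,
                    w_eq: float = 1.0, w_rec: float = 1.0):
    """Illustrative loss terms (weights and formulation are assumptions).

    force_a, force_b: predicted contact forces ([B, 3]) from two sensors
        pressing the same static object from opposite sides. Static
        equilibrium implies f_a + f_b ~ 0, which aligns both sensors'
        latent force spaces without an external F/T sensor.
    recon_*, img_*: reconstructed and ground-truth tactile images.
    """
    # Force-equilibrium constraint between the paired sensor readings.
    eq_loss = (force_a + force_b).pow(2).sum(dim=1).mean()
    # Image-reconstruction losses ground the latents in the raw tactile signal.
    rec_loss = F.mse_loss(recon_a, img_a) + F.mse_loss(recon_b, img_b)
    return w_eq * eq_loss + w_rec * rec_loss
```
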
Problem

Research questions and friction points this paper is trying to address.

tactile sensors
force sensing
domain shift
robot manipulation
sensor heterogeneity
Innovation

Methods, ideas, or system contributions that make the work stand out.

unified tactile representation
latent force space
cross-sensor alignment
zero-shot transfer
force-grounded dynamics modeling