🤖 AI Summary
This work proposes a zero-shot, real-time deformation reconstruction method for soft robots that operates without visual supervision or task-specific retraining. By leveraging a flexible piezoresistive tactile sensor array and a static STL-based geometric proxy, the approach employs a graph attention network to map localized tactile signals into cage-based control parameters, which drive a 3D Gaussian deformation model to produce globally consistent and structurally continuous shape reconstructions. The framework further enables photorealistic RGB rendering in real time. To the best of our knowledge, this is the first method to achieve vision-free, data-agnostic deformation reconstruction for soft robots. It demonstrates strong zero-shot generalization, attaining 0.67 IoU, 0.65 SSIM, and a Chamfer distance of 3.48 mm under bending and twisting motions.
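The graph attention step described above can be illustrated with a minimal single-head GAT layer in NumPy. This is a sketch of the standard GAT mechanism, not the paper's network: the function name, feature shapes, and attention parameterization are illustrative assumptions, and the real model would be trained to regress cage displacements from tactile features.

```python
import numpy as np

def gat_layer(X, A, W, a, leak=0.2):
    """One single-head graph attention layer (illustrative sketch).

    X: (N, F) node features (e.g., per-cage-node tactile readings; hypothetical)
    A: (N, N) adjacency with self-loops (1 = edge)
    W: (F, F') linear projection; a: (2*F',) attention vector
    """
    H = X @ W                                   # projected node features, (N, F')
    f = H.shape[1]
    # attention logits e_ij = LeakyReLU(a^T [h_i || h_j]), computed by broadcasting
    e = (H @ a[:f])[:, None] + (H @ a[f:])[None, :]
    e = np.where(e > 0, e, leak * e)            # LeakyReLU
    e = np.where(A > 0, e, -1e9)                # mask non-edges before softmax
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ H                            # attention-weighted neighbor aggregation
```

With only self-loops in `A`, attention collapses to the identity and the layer reduces to the linear projection `X @ W`, which is a quick sanity check on the masking.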
📝 Abstract
We present a zero-shot deformation reconstruction framework for soft robots that operates without any visual supervision at inference time. In this work, zero-shot deformation reconstruction is defined as the ability to infer object-wide deformations on previously unseen soft robots without collecting object-specific deformation data or performing any retraining during deployment. Our method assumes access to a static geometric proxy of the undeformed object, which can be obtained from an STL model. During operation, the system relies exclusively on tactile sensing, enabling camera-free deformation inference. The proposed framework integrates a flexible piezoresistive sensor array with a geometry-aware, cage-based 3D Gaussian deformation model. Local tactile measurements are mapped to low-dimensional cage control signals and propagated to dense Gaussian primitives to generate globally consistent shape deformations. A graph attention network regresses cage displacements from tactile input, enforcing spatial smoothness and structural continuity via boundary-aware propagation. Given only a nominal geometric proxy and real-time tactile signals, the system performs zero-shot deformation reconstruction of unseen soft robots under bending and twisting motions, while rendering photorealistic RGB images in real time. It achieves 0.67 IoU, 0.65 SSIM, and 3.48 mm Chamfer distance, demonstrating strong zero-shot generalization through explicit coupling of tactile sensing and structured geometric deformation.
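The cage-based propagation step, where low-dimensional cage displacements drive dense Gaussian primitives, can be sketched as follows. This is a simplified stand-in, not the paper's method: inverse-distance weighting replaces whatever generalized barycentric coordinates the framework actually uses, and all names and shapes are assumptions.

```python
import numpy as np

def cage_weights(points, cage, power=2.0):
    """Inverse-distance weights tying each point to all cage vertices.

    A hypothetical simplification of generalized barycentric coordinates:
    weights are nonnegative and sum to 1 per point.
    """
    d = np.linalg.norm(points[:, None, :] - cage[None, :, :], axis=-1)  # (P, C)
    w = 1.0 / np.maximum(d, 1e-8) ** power
    return w / w.sum(axis=1, keepdims=True)

def deform_gaussians(centers, cage_rest, cage_disp):
    """Propagate cage vertex displacements to dense Gaussian centers.

    centers: (P, 3) Gaussian means on the rest-pose proxy
    cage_rest: (C, 3) undeformed cage vertices
    cage_disp: (C, 3) displacements regressed from tactile input
    """
    W = cage_weights(centers, cage_rest)
    return centers + W @ cage_disp  # each center moves by a convex blend of cage motions
```

Because the weights sum to one, a rigid translation of the whole cage translates every Gaussian by the same amount, which is the minimal consistency property any such propagation scheme should satisfy.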