Zero-Shot Deformation Reconstruction for Soft Robots Using a Flexible Sensor Array and Cage-Based 3D Gaussian Modeling

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a zero-shot, real-time deformation reconstruction method for soft robots that operates without visual supervision or task-specific retraining. By leveraging a flexible piezoresistive tactile sensor array and a static STL-based geometric proxy, the approach employs a graph attention network to map localized tactile signals into cage-based control parameters, which drive a 3D Gaussian deformation model to produce globally consistent and structurally continuous shape reconstructions. The framework further enables photorealistic RGB rendering in real time. To the best of our knowledge, this is the first method to achieve vision-free, data-agnostic deformation reconstruction for soft robots. It demonstrates strong zero-shot generalization, attaining 0.67 IoU, 0.65 SSIM, and a Chamfer distance of 3.48 mm under bending and twisting motions.
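The tactile-to-cage mapping described above can be illustrated with a minimal single-head graph attention layer in the style of Veličković et al., where nodes are taxels of the sensor array and edges encode their physical adjacency. The feature sizes, adjacency pattern, and linear readout head below are hypothetical stand-ins for the paper's actual network, not its implementation:

```python
import numpy as np

def gat_layer(h, adj, W, a, slope=0.2):
    """Minimal single-head graph attention layer (GAT-style).
    h:   (N, F)  node features, e.g. one pressure reading per taxel
    adj: (N, N)  0/1 adjacency with self-loops
    W:   (F, Fp) shared linear map; a: (2*Fp,) attention vector."""
    z = h @ W
    n = len(z)
    logits = np.full((n, n), -1e9)                     # mask non-edges
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                e = np.concatenate([z[i], z[j]]) @ a
                logits[i, j] = e if e > 0 else slope * e  # LeakyReLU
    att = np.exp(logits - logits.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)              # softmax over neighbors
    return att @ z                                     # weighted aggregation

# hypothetical usage: 4 taxels in a line, regress 8x3 cage displacements
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 1))                            # one value per taxel
adj = np.eye(4, dtype=int)
adj[np.arange(3), np.arange(1, 4)] = 1                 # chain adjacency
adj[np.arange(1, 4), np.arange(3)] = 1
feats = gat_layer(h, adj, rng.normal(size=(1, 16)), rng.normal(size=(32,)))
cage_deltas = (feats.mean(axis=0) @ rng.normal(size=(16, 24))).reshape(8, 3)
```

The attention weights let each taxel aggregate its neighbors' readings before the pooled feature is decoded into cage displacements, which is one plausible way to obtain the spatial smoothness the summary attributes to boundary-aware propagation.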

📝 Abstract
We present a zero-shot deformation reconstruction framework for soft robots that operates without any visual supervision at inference time. In this work, zero-shot deformation reconstruction is defined as the ability to infer object-wide deformations on previously unseen soft robots without collecting object-specific deformation data or performing any retraining during deployment. Our method assumes access to a static geometric proxy of the undeformed object, which can be obtained from an STL model. During operation, the system relies exclusively on tactile sensing, enabling camera-free deformation inference. The proposed framework integrates a flexible piezoresistive sensor array with a geometry-aware, cage-based 3D Gaussian deformation model. Local tactile measurements are mapped to low-dimensional cage control signals and propagated to dense Gaussian primitives to generate globally consistent shape deformations. A graph attention network regresses cage displacements from tactile input, enforcing spatial smoothness and structural continuity via boundary-aware propagation. Given only a nominal geometric proxy and real-time tactile signals, the system performs zero-shot deformation reconstruction of unseen soft robots in bending and twisting motions, while rendering photorealistic RGB in real time. It achieves 0.67 IoU, 0.65 SSIM, and 3.48 mm Chamfer distance, demonstrating strong zero-shot generalization through explicit coupling of tactile sensing and structured geometric deformation.
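The cage-based deformation step in the abstract can be sketched minimally as follows, assuming a single axis-aligned box cage with trilinear weights; the paper's cage is derived from the STL proxy and its weighting scheme may be more general. Gaussian centers are bound to the cage once, and per-vertex displacements (e.g. regressed from tactile input) are then propagated to all primitives:

```python
import numpy as np

def trilinear_weights(points, lo, hi):
    """Weights of each point w.r.t. the 8 corners of an axis-aligned box
    cage spanning [lo, hi]. Corner index is 4x + 2y + z for corner flags
    (x, y, z) in {0, 1}^3; each row of weights sums to 1."""
    t = (points - lo) / (hi - lo)                      # normalized coords
    corners = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    return np.stack([
        (t[:, 0] if cx else 1 - t[:, 0])
        * (t[:, 1] if cy else 1 - t[:, 1])
        * (t[:, 2] if cz else 1 - t[:, 2])
        for cx, cy, cz in corners], axis=1)

def deform_gaussians(centers, cage_deltas, lo, hi):
    """Propagate cage-vertex displacements to dense Gaussian centers:
    x' = x + sum_i w_i(x) * delta_i."""
    return centers + trilinear_weights(centers, lo, hi) @ cage_deltas

# toy bend: push the four z=1 cage corners along +x
lo, hi = np.zeros(3), np.ones(3)
centers = np.random.default_rng(0).random((100, 3))
deltas = np.zeros((8, 3))
deltas[[1, 3, 5, 7], 0] = 0.2                          # z=1 corners
bent = deform_gaussians(centers, deltas, lo, hi)
```

Because the weights sum to one, a uniform displacement of all cage vertices translates every Gaussian rigidly, while unequal vertex displacements produce the smooth, globally consistent bends and twists the abstract describes.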
Problem

Research questions and friction points this paper is trying to address.

zero-shot deformation reconstruction
soft robots
tactile sensing
camera-free deformation inference
geometric proxy
Innovation

Methods, ideas, or system contributions that make the work stand out.

zero-shot deformation reconstruction
flexible sensor array
cage-based 3D Gaussian modeling
tactile sensing
graph attention network
Linrui Shou
University of Notre Dame
Zilang Chen
University of Notre Dame
Wenjia Xu
University of Notre Dame
Yiyue Luo
Assistant Professor, University of Washington
Intelligent Textiles · Digital Fabrication · HCI · Applied Machine Learning
Tingyu Cheng
University of Notre Dame