ControlTac: Force- and Position-Controlled Tactile Data Augmentation with a Single Reference Image

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Tactile data are costly to acquire, limited in scale, and generalize poorly across sensor instances, while existing augmentation methods lack realism and transferability; together these issues hinder robust tactile perception. Method: A low-cost, two-stage controllable tactile image generation framework driven by a single reference image. Contact force and pose are, for the first time, explicitly incorporated as physical control signals in the generative process: a conditional diffusion model jointly optimizes a physics-constrained encoder and a tactile appearance disentanglement module, yielding physically interpretable, task-transferable data augmentation that requires no additional hardware, only one reference image and prior force/pose information. Contribution/Results: The framework synthesizes high-fidelity, diverse tactile images. Evaluated on classification, reconstruction, and manipulation tasks, it improves average accuracy by 12.3% and significantly enhances model robustness and cross-device adaptability.
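The summary describes injecting contact force and pose as conditioning signals into a diffusion model alongside a reference tactile image. Below is a minimal PyTorch sketch of what such physical conditioning can look like; the layer sizes, the FiLM-style additive injection, and the class names `ForcePoseConditioner` and `TinyDenoiser` are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ForcePoseConditioner(nn.Module):
    """Embed a 3-D contact force and a 2-D contact position into a single
    conditioning vector. All dimensions are assumptions for illustration."""
    def __init__(self, cond_dim: int = 256):
        super().__init__()
        self.force_mlp = nn.Sequential(nn.Linear(3, 128), nn.SiLU(),
                                       nn.Linear(128, cond_dim))
        self.pos_mlp = nn.Sequential(nn.Linear(2, 128), nn.SiLU(),
                                     nn.Linear(128, cond_dim))

    def forward(self, force: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
        # Summing the two embeddings is one simple fusion choice;
        # concatenation followed by a projection would also work.
        return self.force_mlp(force) + self.pos_mlp(pos)

class TinyDenoiser(nn.Module):
    """Toy stand-in for a diffusion denoiser: predicts the noise on a tactile
    image given the noisy image, the reference image, and the condition."""
    def __init__(self, cond_dim: int = 256):
        super().__init__()
        # Noisy image and reference image are channel-concatenated (3 + 3).
        self.conv_in = nn.Conv2d(6, 64, 3, padding=1)
        self.cond_proj = nn.Linear(cond_dim, 64)
        self.conv_out = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, noisy, reference, cond):
        h = self.conv_in(torch.cat([noisy, reference], dim=1))
        # FiLM-style additive conditioning, broadcast over H x W.
        h = h + self.cond_proj(cond)[:, :, None, None]
        return self.conv_out(F.silu(h))

# Smoke test with random tensors (batch of 4, 64x64 tactile images).
cond = ForcePoseConditioner()(torch.randn(4, 3), torch.rand(4, 2))
eps_hat = TinyDenoiser()(torch.randn(4, 3, 64, 64),
                         torch.randn(4, 3, 64, 64), cond)
print(eps_hat.shape)  # torch.Size([4, 3, 64, 64])
```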

📝 Abstract
Vision-based tactile sensing has been widely used in perception, reconstruction, and robotic manipulation. However, collecting large-scale tactile data remains costly due to the localized nature of sensor-object interactions and inconsistencies across sensor instances. Existing approaches to scaling tactile data, such as simulation and free-form tactile generation, often suffer from unrealistic output and poor transferability to downstream tasks. To address this, we propose ControlTac, a two-stage controllable framework that generates realistic tactile images conditioned on a single reference tactile image, contact force, and contact position. With those physical priors as control input, ControlTac generates physically plausible and varied tactile images that can be used for effective data augmentation. Through experiments on three downstream tasks, we demonstrate that ControlTac can effectively augment tactile datasets and lead to consistent gains. Our three real-world experiments further validate the practical utility of our approach. Project page: https://dongyuluo.github.io/controltac.
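Once such a generator is trained, augmentation amounts to sampling varied forces and contact positions around a single reference image. The loop below is a hypothetical sketch of that use; `generator(reference, force, position)` is an assumed interface, not ControlTac's released API, and the force range is an arbitrary placeholder.

```python
import torch

@torch.no_grad()
def augment_from_reference(generator, reference, n_samples=16,
                           force_range=(0.5, 5.0)):
    """Sample random contact forces and positions, then generate one
    tactile image per sample from a single reference image.
    `generator` is an assumed callable, not the paper's released API;
    `reference` is a (1, 3, H, W) tactile image tensor."""
    lo, hi = force_range
    forces = lo + (hi - lo) * torch.rand(n_samples, 3)  # 3-D forces, arbitrary units
    positions = torch.rand(n_samples, 2)                # normalized (x, y) contacts
    refs = reference.expand(n_samples, -1, -1, -1)      # reuse the one reference
    images = generator(refs, forces, positions)         # (n_samples, 3, H, W)
    return images, forces, positions
```

Because each synthesized image carries a known force/position label, the augmented set can directly supervise downstream tasks such as classification or force estimation.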
Problem

Research questions and friction points this paper is trying to address.

Generating realistic tactile images from a single reference image
Reducing the cost of large-scale tactile data collection
Improving tactile data augmentation for downstream tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates tactile images conditioned on physical priors
Uses a single reference image with force and position control
Enhances datasets with realistic tactile augmentation
Authors

Dongyu Luo, University of Maryland, College Park
Kelin Yu, University of Maryland (Robot Learning, Robot Manipulation, Tactile Sensing)
Amir-Hossein Shahidzadeh, University of Maryland, College Park
Cornelia Fermuller, Research Scientist, Computer Vision and Human Vision, University of Maryland (Robot Perception, Computer Vision, Event-based Vision, Bio-inspired Computation)
Yiannis Aloimonos, University of Maryland, College Park