PokeFlex: A Real-World Dataset of Volumetric Deformable Objects for Robotics

📅 2024-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing deformable-object datasets offer few samples and lack paired ground-truth annotations for 3D reconstruction, which limits their use in robotic manipulation. To address this, the authors introduce PokeFlex, a real-world multimodal dataset of deformable objects designed for robotics. It pairs synchronized RGB images, depth maps, point clouds, and textured 3D meshes for 18 soft objects of varying stiffness and shape, deformed by dropping them onto a flat surface or poking them with a robot arm; interaction wrenches and contact locations are additionally reported for the poking case. Data are captured with a professional 360° volumetric capture system that enables complete real-world 3D mesh reconstruction. Models trained on the dataset set the state of the art in multi-object online template-based mesh reconstruction from multimodal data, and the dataset may enable underexplored applications such as real-world deployment of control methods based on mesh simulations.

📝 Abstract
Data-driven methods have shown great potential in solving challenging manipulation tasks; however, their application in the domain of deformable objects has been constrained, in part, by the lack of data. To address this lack, we propose PokeFlex, a dataset featuring real-world multimodal data that is paired and annotated. The modalities include 3D textured meshes, point clouds, RGB images, and depth maps. Such data can be leveraged for several downstream tasks, such as online 3D mesh reconstruction, and it can potentially enable underexplored applications such as the real-world deployment of traditional control methods based on mesh simulations. To deal with the challenges posed by real-world 3D mesh reconstruction, we leverage a professional volumetric capture system that allows complete 360° reconstruction. PokeFlex consists of 18 deformable objects with varying stiffness and shapes. Deformations are generated by dropping objects onto a flat surface or by poking the objects with a robot arm. Interaction wrenches and contact locations are also reported for the latter case. Using different data modalities, we demonstrate a use case for our dataset by training models that, to the best of our knowledge, constitute the state of the art in multi-object online template-based mesh reconstruction from multimodal data, given the novel multimodal nature of PokeFlex. We refer the reader to our website ( https://pokeflex-dataset.github.io/ ) for further demos and examples.
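The abstract lists the paired modalities (RGB images, depth maps, point clouds, textured meshes, plus wrenches and contact locations for poking interactions). A minimal sketch of what one synchronized frame might look like as a container type is shown below; the field names, shapes, and the `PokeFlexSample` class are illustrative assumptions, not the dataset's actual schema or API.

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass
class PokeFlexSample:
    """One hypothetical synchronized frame pairing the modalities the
    abstract describes. Shapes and names are assumptions for illustration."""
    rgb: np.ndarray            # (H, W, 3) uint8 color image
    depth: np.ndarray          # (H, W) float32 depth map
    point_cloud: np.ndarray    # (N, 3) float32 points
    mesh_vertices: np.ndarray  # (V, 3) float32 textured-mesh vertices
    mesh_faces: np.ndarray     # (F, 3) int32 triangle indices
    # Reported only for poking interactions, per the abstract:
    wrench: Optional[np.ndarray] = None   # (6,) force/torque at the contact
    contact: Optional[np.ndarray] = None  # (3,) contact location


# A dummy "poke" frame with plausible placeholder shapes.
sample = PokeFlexSample(
    rgb=np.zeros((480, 640, 3), dtype=np.uint8),
    depth=np.zeros((480, 640), dtype=np.float32),
    point_cloud=np.zeros((2048, 3), dtype=np.float32),
    mesh_vertices=np.zeros((1000, 3), dtype=np.float32),
    mesh_faces=np.zeros((1996, 3), dtype=np.int32),
    wrench=np.zeros(6, dtype=np.float32),
    contact=np.zeros(3, dtype=np.float32),
)
```

A drop (non-poking) frame would simply leave `wrench` and `contact` as `None`, matching the abstract's note that wrenches and contacts are reported only for robot-arm poking.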
Problem

Research questions and friction points this paper is trying to address.

Deformable Object
Data Insufficiency
3D Shape Reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

PokeFlex Dataset
Deformable Object Modeling
Multi-modal Data Collection