Learning Zero-Shot Material States Segmentation, by Implanting Natural Image Patterns in Synthetic Data

πŸ“… 2024-03-05
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 2
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Zero-shot material state segmentation suffers from synthetic data lacking real-world texture diversity and from costly, low-accuracy annotations of real images. Method: We propose a novel natural-pattern injection paradigm: unsupervised texture clustering and transfer embed authentic textures and patterns from natural images into SVBRDF/PBR-enhanced synthetic rendering pipelines, yielding physically accurate and visually realistic synthetic data. Contribution/Results: We establish the first cross-domain, multi-state zero-shot material state segmentation benchmark supporting partial-similarity annotations. We design a zero-shot semantic segmentation network tailored to this paradigm. On our benchmark, our method significantly outperforms mainstream foundation models and generalizes robustly across six+ domains (e.g., food, soil) and ten+ states (e.g., wet, dry, infected). We publicly release 300K textures, the full dataset, source code, and pre-trained models.

πŸ“ Abstract
Visual recognition of materials and their states is essential for understanding the physical world, from identifying wet regions on surfaces or stains on fabrics to detecting infected areas on plants or minerals in rocks. Collecting data that captures this vast variability is complex due to the scattered and gradual nature of material states. Manually annotating real-world images is constrained by cost and precision, while synthetic data, although accurate and inexpensive, lacks real-world diversity. This work aims to bridge this gap by infusing patterns automatically extracted from real-world images into synthetic data. Hence, patterns collected from natural images are used to generate and map materials into synthetic scenes. This unsupervised approach captures the complexity of the real world while maintaining the precision and scalability of synthetic data. We also present the first comprehensive benchmark for zero-shot material state segmentation, utilizing real-world images across a diverse range of domains, including food, soils, construction, plants, liquids, and more, each appearing in various states such as wet, dry, infected, cooked, and burned. The annotation includes partial similarity between regions with similar but not identical materials, as well as hard segmentation of only identical material states. This benchmark eluded top foundation models, exposing the limitations of existing data collection methods. Meanwhile, nets trained on the infused data performed significantly better on this and related tasks. The dataset, code, and trained model are available. We also share 300,000 extracted textures and SVBRDF/PBR materials to facilitate future dataset generation.
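The core idea above is to harvest texture patches from natural photographs without supervision and then group them so they can be mapped onto synthetic surfaces. The paper's actual pipeline is not reproduced here; as a minimal illustrative sketch of the unsupervised patch-extraction-and-clustering step (all function names, the toy image, and the crude one-dimensional feature are our assumptions, not the authors' code):

```python
import numpy as np

def extract_patches(image, patch_size=8, n_patches=200, rng=None):
    """Sample square patches uniformly at random from an H x W x 3 image."""
    rng = rng or np.random.default_rng(0)
    h, w, _ = image.shape
    ys = rng.integers(0, h - patch_size, n_patches)
    xs = rng.integers(0, w - patch_size, n_patches)
    return np.stack([image[y:y + patch_size, x:x + patch_size]
                     for y, x in zip(ys, xs)])

def kmeans(features, k=4, n_iter=20, rng=None):
    """Plain k-means clustering; returns (centroids, labels)."""
    rng = rng or np.random.default_rng(0)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(n_iter):
        # Assign each feature vector to its nearest centroid.
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids from the assigned members.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids, labels

# Toy random image standing in for a natural photograph.
rng = np.random.default_rng(0)
image = rng.random((128, 128, 3))
patches = extract_patches(image, rng=rng)
# Crude stand-in feature: mean intensity per patch (a real pipeline
# would use a richer texture descriptor).
feats = patches.reshape(len(patches), -1).mean(axis=1, keepdims=True)
_, labels = kmeans(feats, k=4, rng=rng)
```

Each resulting cluster stands for one recurring texture family; in the paper's setting, such extracted patterns are then used to generate and map materials into synthetic scenes.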
Problem

Research questions and friction points this paper is trying to address.

Bridging real-world diversity gap in synthetic data for material segmentation
Creating zero-shot benchmark for diverse material state segmentation
Enhancing model performance with infused real-world patterns in synthetic data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Infusing real-world patterns into synthetic data
Unsupervised approach for material segmentation
Comprehensive zero-shot material state benchmark
πŸ”Ž Similar Papers
No similar papers found.
Sagi Eppel
University of Toronto, Vector Institute
Computer vision · Chemistry · Machine learning
Jolina Li
University of Toronto, Computer Science Department
Manuel S. Drehwald
University of Toronto, Computer Science Department
AlΓ‘n Aspuru-Guzik
University of Toronto, Vector Institute, Computer Science Department, Chemistry Department