An analysis of vision-language models for fabric retrieval

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two challenges in zero-shot text-to-image retrieval for fabrics: scarce public data and the difficulty of modeling fine-grained textile textures. We propose an automated annotation pipeline that leverages multimodal large language models (MLLMs) to jointly generate natural-language descriptions and structured attribute texts (e.g., material, weave structure, sheen). Methodologically, we systematically evaluate three vision-language models (CLIP, LAION-CLIP, and Meta's Perception Encoder) within a unified framework, comparing retrieval performance with free-text versus structured descriptors. Results show that structured attributes significantly improve cross-modal alignment accuracy, and the Perception Encoder achieves the best performance owing to its superior feature-alignment capability. Fine-grained fabric discrimination nevertheless remains challenging, underscoring the need for domain-specific fine-tuning. The study establishes a scalable, MLLM-driven data curation paradigm and provides an empirical benchmark for industrial-grade cross-modal textile retrieval.
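To ground the evaluation setup, the sketch below runs zero-shot text-to-image retrieval with a CLIP-style encoder via the Hugging Face transformers API. The checkpoint name, swatch file names, and query string are illustrative placeholders, not the paper's configuration; the other encoders would plug into the same ranking pipeline through their own loaders.

```python
# Minimal sketch of zero-shot text-to-image fabric retrieval.
# Model name, image paths, and the query are illustrative placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical gallery of fabric sample photos.
image_paths = ["swatch_001.jpg", "swatch_002.jpg", "swatch_003.jpg"]
images = [Image.open(p).convert("RGB") for p in image_paths]

# A structured, attribute-based query of the kind the summary describes.
query = "material: silk; weave structure: satin; sheen: high gloss"

with torch.no_grad():
    img_emb = model.get_image_features(
        **processor(images=images, return_tensors="pt"))
    txt_emb = model.get_text_features(
        **processor(text=[query], return_tensors="pt", padding=True))

# L2-normalize, then rank gallery images by cosine similarity to the query.
img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
scores = (txt_emb @ img_emb.T).squeeze(0)

for rank, idx in enumerate(scores.argsort(descending=True).tolist(), 1):
    print(f"{rank}. {image_paths[idx]}  cosine={scores[idx].item():.3f}")
```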

📝 Abstract
Effective cross-modal retrieval is essential for applications like information retrieval and recommendation systems, particularly in specialized domains such as manufacturing, where product information often consists of visual samples paired with a textual description. This paper investigates the use of Vision-Language Models (VLMs) for zero-shot text-to-image retrieval on fabric samples. We address the lack of publicly available datasets by introducing an automated annotation pipeline that uses Multimodal Large Language Models (MLLMs) to generate two types of textual descriptions: free-form natural language and structured attribute-based descriptions. We produce these descriptions to evaluate retrieval performance across three Vision-Language Models: CLIP, LAION-CLIP, and Meta's Perception Encoder. Our experiments demonstrate that structured, attribute-rich descriptions significantly enhance retrieval accuracy, particularly for visually complex fabric classes, with the Perception Encoder outperforming other models due to its robust feature alignment capabilities. However, zero-shot retrieval remains challenging in this fine-grained domain, underscoring the need for domain-adapted approaches. Our findings highlight the importance of combining technical textual descriptions with advanced VLMs to optimize cross-modal retrieval in industrial applications.
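The annotation pipeline itself is straightforward to sketch: one MLLM call per image, returning both the free-form caption and the structured attributes. The paper does not disclose which MLLM or prompt it uses, so the OpenAI client, model name, and attribute schema below are assumptions for illustration only.

```python
# Sketch of an MLLM-driven annotation step: one call yields a free-form
# caption plus structured attributes for a fabric image. Model name,
# prompt wording, and attribute schema are assumptions, not the paper's
# published pipeline.
import base64
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = (
    "Describe this fabric sample. Return JSON with two keys: "
    "'caption' (one free-form sentence) and 'attributes' (an object "
    "with 'material', 'weave_structure', and 'sheen')."
)

def annotate(image_path: str) -> dict:
    b64 = base64.b64encode(open(image_path, "rb").read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # any capable multimodal endpoint would do
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return json.loads(resp.choices[0].message.content)

# annotate("swatch_001.jpg") might return, e.g.:
# {"caption": "...", "attributes": {"material": "silk", ...}}
```
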
Problem

Research questions and friction points this paper is trying to address.

Enhancing cross-modal retrieval for fabric samples using VLMs
Generating automated fabric descriptions via MLLMs for dataset creation
Evaluating VLMs' zero-shot performance on fine-grained fabric retrieval
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated annotation pipeline using MLLMs
Structured attribute-based descriptions enhance accuracy (see the sketch after this list)
Perception Encoder excels in feature alignment
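A minimal sketch of how the structured descriptors might be assembled and scored follows, assuming hypothetical helper names and the attribute fields named in the summary; the paper's exact template and metrics may differ.

```python
# Hypothetical helpers: serialize attributes into a structured descriptor,
# and score a retrieval run with Recall@K. These mirror the attributes
# (material, weave structure, sheen) mentioned in the summary, not the
# paper's exact template.

def structured_text(attrs: dict) -> str:
    """Serialize attributes into a compact, attribute-rich descriptor."""
    return "; ".join(f"{k.replace('_', ' ')}: {v}" for k, v in attrs.items())

def recall_at_k(ranked_ids: list[list[int]], true_ids: list[int], k: int) -> float:
    """Fraction of queries whose ground-truth image appears in the top k."""
    hits = sum(t in r[:k] for r, t in zip(ranked_ids, true_ids))
    return hits / len(true_ids)

attrs = {"material": "wool", "weave_structure": "twill", "sheen": "matte"}
print(structured_text(attrs))
# -> "material: wool; weave structure: twill; sheen: matte"

# With rankings from any of the three encoders, Recall@K compares the
# free-text and structured query variants on the same gallery.
print(recall_at_k([[2, 0, 1]], [0], k=2))  # -> 1.0
```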