Benchmarking Multimodal Large Language Models for Missing Modality Completion in Product Catalogues

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of incomplete multimodal information on e-commerce platforms, where missing or erroneous labels and metadata hinder downstream applications. To this end, we introduce MMPCBench, the first benchmark tailored to real-world e-commerce scenarios for missing-modality completion, systematically evaluating six multimodal large language models (including the Qwen2.5-VL and Gemma-3 series) on both image-to-text and text-to-image completion tasks. We further apply Group Relative Policy Optimization (GRPO) to align model behavior with the task objectives. Our findings reveal that model scale does not correlate straightforwardly with performance: while current models excel at high-level semantic understanding, they struggle with fine-grained word- or pixel-level alignment. Moreover, GRPO significantly improves only image-to-text completion, and performance varies markedly across product categories.

📝 Abstract
Missing-modality information on e-commerce platforms, such as absent product images or textual descriptions, often arises from annotation errors or incomplete metadata, impairing both product presentation and downstream applications such as recommendation systems. Motivated by the multimodal generative capabilities of recent Multimodal Large Language Models (MLLMs), this work investigates a fundamental yet underexplored question: can MLLMs generate missing modalities for products in e-commerce scenarios? We propose the Missing Modality Product Completion Benchmark (MMPCBench), which consists of two sub-benchmarks: a Content Quality Completion Benchmark and a Recommendation Benchmark. We further evaluate six state-of-the-art MLLMs from the Qwen2.5-VL and Gemma-3 model families across nine real-world e-commerce categories, focusing on image-to-text and text-to-image completion tasks. Experimental results show that while MLLMs can capture high-level semantics, they struggle with fine-grained word-level and pixel- or patch-level alignment. In addition, performance varies substantially across product categories and model scales, and we observe no trivial correlation between model size and performance, in contrast to trends commonly reported in mainstream benchmarks. We also explore Group Relative Policy Optimization (GRPO) to better align MLLMs with this task. GRPO improves image-to-text completion but does not yield gains for text-to-image completion. Overall, these findings expose the limitations of current MLLMs in real-world cross-modal generation and represent an early step toward more effective missing-modality product completion.
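The abstract notes that GRPO is explored to align the MLLMs with the completion task. GRPO's defining idea is to drop the learned value-function baseline of PPO and instead normalize each sampled completion's reward against the other completions in its group. A minimal sketch of that advantage computation is below; the function name, group size, and reward values are illustrative assumptions, not details from the paper.

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages: center each reward on the group's mean
    and scale by the group's standard deviation (GRPO's critic-free baseline).
    `rewards` holds one scalar reward per sampled completion of the same prompt."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mu) / sigma for r in rewards]

# Example: rewards for a group of 4 completions sampled for one product,
# e.g. scored by a text-similarity reward in the image-to-text setting.
rewards = [0.2, 0.5, 0.8, 0.5]
advantages = grpo_advantages(rewards)
```

In training, each advantage would then weight the policy-gradient term for its completion's tokens, so above-average completions in a group are reinforced and below-average ones suppressed, without training a separate critic.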
Problem

Research questions and friction points this paper is trying to address.

missing modality
multimodal large language models
product catalogues
e-commerce
cross-modal generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Large Language Models
Missing Modality Completion
E-commerce Product Catalogues
Cross-modal Generation
Group Relative Policy Optimization