🤖 AI Summary
This work addresses the challenge of effectively modeling fine-grained attributes in e-commerce product understanding, a task hindered by the common practice of employing multimodal large language models solely as global feature extractors that overlook local details. To overcome this limitation, the authors propose a reasoning-aware multimodal representation learning framework that, for the first time, integrates explicit reasoning capabilities into e-commerce representation learning. The framework introduces three key innovations: adaptive multi-head modality fusion, a reasoning strategy exploration mechanism leveraging both contrastive and reinforcement learning, and a fine-grained residual enhancement module designed to preserve local information during forward propagation. Evaluated on the newly curated large-scale benchmark MBE3.0 and public datasets, the proposed method achieves state-of-the-art performance across multiple downstream tasks under zero-shot settings.
📝 Abstract
With the rapid growth of e-commerce, exploring general representations rather than task-specific ones has attracted increasing attention. Although recent multimodal large language models (MLLMs) have driven significant progress in product understanding, they are typically employed as feature extractors that implicitly encode product information into global embeddings, thereby limiting their ability to capture fine-grained attributes. We therefore argue that leveraging the reasoning capabilities of MLLMs to explicitly model fine-grained product attributes holds significant potential. Nevertheless, achieving this goal remains non-trivial due to several key challenges: (i) long-context reasoning tends to dilute the model's attention to salient information in the raw input; (ii) supervised fine-tuning (SFT) primarily encourages rigid imitation, limiting the exploration of effective reasoning strategies; and (iii) fine-grained details are progressively attenuated during forward propagation. To address these issues, we propose MOON3.0, the first reasoning-aware MLLM-based model for product representation learning. Our method (1) employs a multi-head modality fusion module to adaptively integrate raw signals; (2) incorporates a joint contrastive and reinforcement learning framework to autonomously explore more effective reasoning strategies; and (3) introduces a fine-grained residual enhancement module to progressively preserve local details throughout the network. Additionally, we release MBE3.0, a large-scale multimodal e-commerce benchmark. Experimentally, our model demonstrates state-of-the-art zero-shot performance across various downstream tasks on both our benchmark and public datasets.
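To make components (1) and (3) concrete, here is a minimal NumPy sketch of what adaptive multi-head modality fusion and fine-grained residual enhancement *could* look like. The paper does not specify these modules' internals, so the gating scheme, weight shapes, and the `alpha` re-injection coefficient below are illustrative assumptions, not the authors' implementation: each head computes a softmax gate over the image and text sub-embeddings, and shallow (fine-grained) features are later added back to the deep representation via a scaled residual.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_fusion(img, txt, n_heads=4, seed=0):
    """Hypothetical adaptive multi-head modality fusion.

    Splits each modality embedding into `n_heads` sub-vectors; each head
    predicts a 2-way softmax gate (image vs. text) from the concatenated
    sub-vectors and mixes the modalities accordingly.
    """
    d = img.shape[-1]
    assert d % n_heads == 0, "embedding dim must divide evenly across heads"
    hd = d // n_heads
    rng = np.random.default_rng(seed)
    # Per-head gate projection: 2 logits from the concatenated sub-vectors.
    W = rng.standard_normal((n_heads, 2, 2 * hd)) * 0.01
    img_h = img.reshape(n_heads, hd)
    txt_h = txt.reshape(n_heads, hd)
    logits = np.stack(
        [W[h] @ np.concatenate([img_h[h], txt_h[h]]) for h in range(n_heads)]
    )                                # (n_heads, 2)
    gates = softmax(logits, axis=-1)  # convex weights per head
    fused = gates[:, :1] * img_h + gates[:, 1:] * txt_h
    return fused.reshape(d)

def residual_enhance(deep_feat, shallow_feat, alpha=0.1):
    """Hypothetical fine-grained residual enhancement: re-inject normalized
    shallow features so local details survive forward propagation."""
    s = shallow_feat / (np.linalg.norm(shallow_feat) + 1e-8)
    return deep_feat + alpha * s
```

Because each head's gate is a softmax, the fused sub-vector is an elementwise convex combination of the two modalities, so neither signal can be entirely discarded; the residual term then restores early-layer detail that deeper layers would otherwise attenuate.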