VLM4Rec: Multimodal Semantic Representation for Recommendation with Large Vision-Language Models

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of traditional multimodal recommendation systems, which rely on raw visual features and struggle to capture high-level semantic attributes—such as style, material, or usage context—that critically influence user preferences. To overcome this, the authors propose VLM4Rec, a framework that leverages large vision-language models to translate product images into natural language descriptions and subsequently encodes them into preference-aligned semantic representations. By eschewing complex feature fusion schemes, VLM4Rec constructs a lightweight semantic space aligned with user profiles. The approach enables efficient decoupled offline-online recommendation and consistently outperforms baselines using raw visual features or various fusion strategies across multiple multimodal datasets, demonstrating the efficacy and superiority of explicit semantic representations in recommendation tasks.

📝 Abstract
Multimodal recommendation is commonly framed as a feature fusion problem, where textual and visual signals are combined to better model user preference. However, the effectiveness of multimodal recommendation may depend not only on how modalities are fused, but also on whether item content is represented in a semantic space aligned with preference matching. This issue is particularly important because raw visual features often preserve appearance similarity, while user decisions are typically driven by higher-level semantic factors such as style, material, and usage context. Motivated by this observation, we propose LVLM-grounded Multimodal Semantic Representation for Recommendation (VLM4Rec), a lightweight framework that organizes multimodal item content through semantic alignment rather than direct feature fusion. VLM4Rec first uses a large vision-language model to ground each item image into an explicit natural-language description, and then encodes the grounded semantics into dense item representations for preference-oriented retrieval. Recommendation is subsequently performed through a simple profile-based semantic matching mechanism over historical item embeddings, yielding a practical offline-online decomposition. Extensive experiments on multiple multimodal recommendation datasets show that VLM4Rec consistently improves performance over raw visual features and several fusion-based alternatives, suggesting that representation quality may matter more than fusion complexity in this setting. The code is released at https://github.com/tyvalencia/enhancing-mm-rec-sys.
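The pipeline described in the abstract can be summarized in a few steps: ground each item image into a natural-language description with a large vision-language model (offline), encode those descriptions into dense item vectors (offline), then at serving time build a user profile from the embeddings of historically interacted items and rank candidates by semantic similarity. The sketch below illustrates this flow; it is not the authors' released code (see the linked repository for that), and the `caption_item_image` placeholder and the sentence-transformers encoder are assumptions standing in for whatever LVLM and text encoder the paper actually uses.

```python
# Minimal sketch of a VLM4Rec-style pipeline (illustrative, not the official implementation).
from typing import Dict, List
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed text encoder; any dense encoder works

def caption_item_image(image_path: str) -> str:
    """Placeholder for the LVLM grounding step: image -> explicit natural-language description."""
    raise NotImplementedError("Swap in a real vision-language model's captioning call here.")

def build_item_index(item_captions: Dict[str, str]) -> Dict[str, np.ndarray]:
    """Offline stage: encode grounded descriptions into L2-normalized item embeddings."""
    item_ids = list(item_captions)
    vectors = encoder.encode([item_captions[i] for i in item_ids], normalize_embeddings=True)
    return dict(zip(item_ids, vectors))

def recommend(history: List[str], item_index: Dict[str, np.ndarray], k: int = 10) -> List[str]:
    """Online stage: profile = mean of historical item embeddings; rank unseen items by cosine similarity."""
    profile = np.mean([item_index[i] for i in history], axis=0)
    profile /= np.linalg.norm(profile)
    scores = {i: float(vec @ profile) for i, vec in item_index.items() if i not in history}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Because item captioning and encoding happen once per item, only the cheap profile averaging and similarity ranking run online, which is the offline-online decomposition the abstract refers to.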
Problem

Research questions and friction points this paper is trying to address.

multimodal recommendation
semantic representation
vision-language models
preference alignment
item representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

vision-language models
semantic representation
multimodal recommendation
preference alignment
feature grounding
Ty Valencia
University of Southern California
Burak Barlas
University of Southern California
Varun Singhal
University of Southern California
Ruchir Bhatia
University of Southern California
Wei Yang
Southern Medical University, Guangzhou, China
Medical Image Analysis, Machine Learning