🤖 AI Summary
For dialogue-based recommendation in visually sensitive domains (e.g., fashion, home décor), this paper proposes LaViC, a multimodal dialogue recommendation framework that integrates image perception. Methodologically: (1) it introduces a visual knowledge self-distillation mechanism that compresses product images from hundreds of tokens into a small set of visual tokens while preserving discriminative visual features; (2) it designs a recommendation-oriented vision–language prompt tuning strategy to jointly model dialogue context and the distilled visual tokens; (3) it constructs a new cross-platform benchmark for visually-aware conversational recommendation by aligning Reddit conversations with Amazon product listings. Experiments demonstrate that LaViC significantly outperforms text-only and mainstream open-source multimodal baselines, achieving recommendation accuracy on par with, or exceeding, GPT-3.5-turbo, GPT-4o-mini, and GPT-4o. These results validate the critical role of fine-grained visual information in attribute-level personalized modeling.
📝 Abstract
Conversational recommender systems engage users in dialogues to refine their needs and provide more personalized suggestions. Although textual information suffices for many domains, visually driven categories such as fashion or home decor may require detailed visual information related to color, style, or design. To address this challenge, we propose LaViC (Large Vision-Language Conversational Recommendation Framework), a novel approach that integrates compact image representations into dialogue-based recommendation systems. LaViC leverages a large vision-language model in a two-stage process: (1) visual knowledge self-distillation, which condenses product images from hundreds of tokens into a small set of visual tokens in a self-distillation manner, significantly reducing computational overhead, and (2) recommendation prompt tuning, which enables the model to incorporate both dialogue context and distilled visual tokens, providing a unified mechanism for capturing textual and visual features. To support rigorous evaluation of visually-aware conversational recommendation, we construct a new dataset by aligning Reddit conversations with Amazon product listings across multiple visually oriented categories (e.g., fashion, beauty, and home). This dataset covers realistic user queries and product appearances in domains where visual details are crucial. Extensive experiments demonstrate that LaViC significantly outperforms text-only conversational recommendation methods and open-source vision-language baselines. Moreover, LaViC achieves competitive or superior accuracy compared to prominent proprietary baselines (e.g., GPT-3.5-turbo, GPT-4o-mini, and GPT-4o), demonstrating the necessity of explicitly using visual data for capturing product attributes and showing the effectiveness of our vision-language integration. Our code and dataset are available at https://github.com/jeon185/LaViC.
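To make the token-compression idea behind stage (1) concrete, the following is a minimal, hypothetical NumPy sketch of attention-pooling hundreds of image tokens into a handful of visual tokens and scoring them against a pooled teacher representation. All dimensions, the mean-pooled teacher target, and the loss form are illustrative assumptions, not LaViC's actual implementation (which distills within the full vision-language model itself).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) sizes: a ViT-style encoder typically emits
# several hundred patch tokens per image; we compress them to just 8.
n_image_tokens, n_visual_tokens, dim = 576, 8, 64

image_tokens = rng.normal(size=(n_image_tokens, dim))    # frozen encoder output
queries = 0.1 * rng.normal(size=(n_visual_tokens, dim))  # learnable summary queries

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def compress(queries, tokens):
    """Attention-pool many image tokens into a few compact visual tokens."""
    attn = softmax(queries @ tokens.T / np.sqrt(tokens.shape[1]))  # (k, N)
    return attn @ tokens                                           # (k, d)

visual_tokens = compress(queries, image_tokens)

# Stand-in teacher signal: mean of the full token set. A distillation loss
# would push the compact summary toward the teacher representation.
teacher = image_tokens.mean(axis=0)
student = visual_tokens.mean(axis=0)
distill_loss = float(np.mean((student - teacher) ** 2))

print(visual_tokens.shape, distill_loss >= 0.0)
```

In a real training loop the `queries` would be updated by gradient descent on the distillation loss, so the few visual tokens learn to retain the discriminative appearance features (color, style, design) the recommender needs.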