🤖 AI Summary
This work addresses the modality fragmentation problem in generative recommendation (GR): existing methods typically rely on single-modality inputs (e.g., text only) and thus fail to capture the heterogeneous multimodal content of real-world items. We propose the Multimodal Generative Recommendation (MGR) paradigm. To achieve unified semantic modeling across heterogeneous modalities (e.g., image and text), we design MGR-LF++, a late-fusion framework that introduces a contrastive modality alignment mechanism and modality-specific token embeddings to strengthen cross-modal semantic consistency. Built on an autoregressive generation architecture, MGR-LF++ achieves an average improvement of over 20% across multiple benchmark datasets relative to unimodal baselines, improving both recommendation relevance and the quality of generated outputs.
📝 Abstract
Generative recommendation (GR) has become a powerful paradigm in recommender systems: it implicitly links modality and semantics to item representations, in contrast to earlier methods that relied on non-semantic item identifiers in autoregressive models. However, prior research has predominantly treated modalities in isolation, typically assuming item content is unimodal (usually text). We argue that this is a significant limitation given the rich, multimodal nature of real-world data and the potential sensitivity of GR models to modality choices and usage. Our work explores the problem of Multimodal Generative Recommendation (MGR), highlighting the importance of modality choices in GR frameworks. We show that GR models are particularly sensitive to different modalities and examine the challenges of achieving effective GR when multiple modalities are available. By evaluating design strategies for leveraging multiple modalities, we identify key challenges and introduce MGR-LF++, an enhanced late fusion framework that employs contrastive modality alignment and special tokens to denote different modalities, achieving a performance improvement of over 20% compared to single-modality alternatives.
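The paper does not spell out its loss here, but a common way to realize contrastive modality alignment of the kind the abstract describes is a symmetric InfoNCE objective that pulls together the text and image embeddings of the same item while pushing apart in-batch mismatches. The following NumPy sketch is a hypothetical illustration of that idea, not the authors' implementation; all function names and the temperature value are assumptions.

```python
# Hypothetical sketch of contrastive modality alignment for MGR-style models:
# a symmetric InfoNCE loss between text and image embeddings of the same item.
# This is an illustration under assumed design choices, not the paper's code.
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere so dot products are cosines."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_alignment_loss(text_emb, image_emb, temperature=0.07):
    """Matching (text, image) pairs of item i are positives; all other
    in-batch pairs serve as negatives (CLIP-style symmetric InfoNCE)."""
    t = l2_normalize(text_emb)
    v = l2_normalize(image_emb)
    logits = t @ v.T / temperature          # (B, B) cross-modal similarities

    def cross_entropy_diag(lg):
        # Numerically stable log-softmax; the target for row i is column i.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        idx = np.arange(lg.shape[0])
        return -logp[idx, idx].mean()

    # Average the text->image and image->text directions.
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))

rng = np.random.default_rng(0)
text = rng.normal(size=(4, 16))
aligned_loss = contrastive_alignment_loss(text, text)                  # identical modalities
random_loss = contrastive_alignment_loss(text, rng.normal(size=(4, 16)))  # unrelated pairs
```

Under this toy setup, perfectly aligned embeddings yield a much lower loss than randomly paired ones, which is the gradient signal that would nudge the two modality encoders toward a shared semantic space before late fusion.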