Fine-tuning Multimodal Large Language Models for Product Bundling

📅 2024-07-16
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing product bundling recommendation methods exhibit limitations in multimodal semantic understanding and large-model knowledge utilization, particularly in modeling deep associations between multiple modalities (e.g., images and text) and bundling logic. To address this, we propose a unified multimodal tokenization and multiple-choice question-answering reformulation framework tailored for e-commerce bundling recommendation, transforming image–text relationships into structured language tasks. We introduce, for the first time, a soft-separation token mechanism and a lightweight multimodal fusion module, coupled with a progressive decoupled fine-tuning strategy that separately optimizes bundling pattern learning and domain-specific semantic understanding. Evaluated on four benchmark datasets across two domains, our approach consistently outperforms state-of-the-art methods, achieving significant improvements in recommendation relevance, diversity, and interpretability.
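The summary describes a fusion module that collapses all non-textual item features (e.g., image and relational embeddings) into a single informative token. As a rough illustration only, the sketch below uses a plain weighted average as a stand-in for the paper's learned fusion; the function name `fuse_modalities` and the fixed weights are assumptions, not the authors' actual design.

```python
from typing import List

def fuse_modalities(features: List[List[float]],
                    weights: List[float]) -> List[float]:
    """Collapse several modality feature vectors into one fused vector.

    A weighted average stands in for the learned fusion module that
    would produce the single non-textual token fed to the LLM.
    """
    total = sum(weights)
    norm = [w / total for w in weights]  # normalize so weights sum to 1
    fused = [0.0] * len(features[0])
    for vec, w in zip(features, norm):
        for i, x in enumerate(vec):
            fused[i] += w * x
    return fused

# e.g., equally weighting an image vector and a relational vector:
token = fuse_modalities([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0])
```

In the actual framework this single fused embedding would be inserted into the LLM's input sequence, keeping the prompt short regardless of how many non-textual modalities each item carries.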

📝 Abstract
Recent advances in product bundling have leveraged multimodal information through sophisticated encoders, but remain constrained by limited semantic understanding and a narrow scope of knowledge. Therefore, some attempts employ In-context Learning (ICL) to explore the potential of large language models (LLMs) for their extensive knowledge and complex reasoning abilities. However, these efforts are inadequate in understanding multimodal data and exploiting LLMs' knowledge for product bundling. To bridge the gap, we introduce Bundle-MLLM, a novel framework that fine-tunes LLMs through a hybrid item tokenization approach within a well-designed optimization strategy. Specifically, we integrate textual, media, and relational data into a unified tokenization, introducing a soft separation token to distinguish between textual and non-textual tokens. Additionally, a streamlined yet powerful multimodal fusion module is employed to embed all non-textual features into a single, informative token, significantly boosting efficiency. To tailor product bundling tasks for LLMs, we reformulate the task as a multiple-choice question with candidate items as options. We further propose a progressive optimization strategy that fine-tunes LLMs for disentangled objectives: 1) learning bundle patterns and 2) enhancing multimodal semantic understanding specific to product bundling. Extensive experiments on four datasets across two domains demonstrate that our approach outperforms a range of state-of-the-art (SOTA) methods.
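The abstract's multiple-choice reformulation can be sketched as a prompt builder: candidate items become lettered options, and placeholders mark where the soft-separation token and fused non-textual token would sit. This is a hedged illustration; the token string `<bundle_sep>`, the function `build_bundle_prompt`, and the exact prompt wording are assumptions, not taken from the paper.

```python
from typing import List

# Hypothetical placeholder for the learned soft-separation token that
# delimits textual tokens from non-textual ones in the unified sequence.
SOFT_SEP = "<bundle_sep>"

def build_bundle_prompt(partial_bundle: List[str],
                        candidates: List[str]) -> str:
    """Phrase bundle completion as a multiple-choice question
    with the candidate items as lettered options."""
    options = "\n".join(
        f"{chr(ord('A') + i)}. {item}" for i, item in enumerate(candidates)
    )
    return (
        "A user's bundle so far: " + ", ".join(partial_bundle) + "\n"
        + SOFT_SEP + " [fused non-textual item token] " + SOFT_SEP + "\n"
        + "Which candidate item best completes this bundle?\n"
        + options
    )

prompt = build_bundle_prompt(
    ["tent", "sleeping bag"], ["camping stove", "office chair"]
)
```

Framing the task this way lets the fine-tuned LLM answer with a single option letter, which is cheaper to decode and score than free-form item generation.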
Problem

Research questions and friction points this paper is trying to address.

Product Bundling Recommendation
Multimodal Information Processing
Large Language Model Integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal Learning
Efficient Module Compression
Progressive Optimization Strategy