Factorized Transport Alignment for Multimodal and Multiview E-commerce Representation Learning

📅 2025-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language models (VLMs) align only product titles with primary images, neglecting the critical semantic information conveyed by non-primary images and auxiliary textual modalities (e.g., descriptions, tags) in e-commerce scenarios, thereby limiting multimodal, multi-view representation learning. To address this, we propose Factorized Transport, a lightweight factorized approximation of optimal transport designed for open-platform e-commerce. It enables scalable multi-view alignment, spanning primary/auxiliary images and titles/descriptions/tags, while fusing views at inference with zero online overhead. Our approach integrates stochastic view sampling, dual-tower embedding caching, and multi-view contrastive learning. Evaluated on a million-scale industrial product dataset, it achieves a +7.9% improvement in Recall@500 over strong multimodal baselines, demonstrating both effectiveness and deployability for large-scale, real-time e-commerce search.
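The multi-view alignment could look roughly like the sketch below: each auxiliary view is softly aligned to the primary view, and cross-auxiliary couplings are composed through that anchor rather than computed for every view pair. The `soft_plan` helper, the composition-through-the-anchor factorization, and all shapes are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2n(Z):
    """L2-normalize each row (embedding) of Z."""
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

def soft_plan(X, Y, tau=0.1):
    """Row-stochastic soft assignment between two batches of embeddings:
    a cheap, Sinkhorn-free stand-in for an optimal-transport coupling."""
    S = X @ Y.T / tau
    S -= S.max(axis=1, keepdims=True)   # numerical stability before exp
    P = np.exp(S)
    return P / P.sum(axis=1, keepdims=True)

B, d = 8, 16                            # hypothetical batch size / embed dim
primary = l2n(rng.normal(size=(B, d)))  # primary-image tower output
aux_img = l2n(rng.normal(size=(B, d)))  # an auxiliary-image view
tags    = l2n(rng.normal(size=(B, d)))  # an auxiliary-text view (tags)

# Factorized route: couple each auxiliary view to the primary anchor only,
# then compose plans, instead of materializing all O(V^2) pairwise plans.
T_ap = soft_plan(aux_img, primary)      # aux image -> primary
T_pt = soft_plan(primary, tags)         # primary   -> tags
T_at = T_ap @ T_pt                      # composed aux image -> tags plan
```

Since both factors are row-stochastic, the composed plan is too, so it can be used directly as a soft matching for a contrastive or transport-style loss.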

📝 Abstract
The rapid growth of e-commerce requires robust multimodal representations that capture diverse signals from user-generated listings. Existing vision-language models (VLMs) typically align titles with primary images only (a single view), overlooking the non-primary images and auxiliary textual views that provide critical semantics in open marketplaces such as Etsy or Poshmark. To address this, we propose a framework that unifies multimodal and multi-view learning through Factorized Transport, a lightweight approximation of optimal transport designed for scalability and deployment efficiency. During training, the method emphasizes primary views while stochastically sampling auxiliary ones, reducing training cost from quadratic in the number of views to constant per item. At inference, all views are fused into a single cached embedding, preserving the efficiency of two-tower retrieval with no additional online overhead. On an industrial dataset of 1M product listings and 0.3M interactions, our approach delivers consistent improvements in cross-view and query-to-item retrieval, achieving up to +7.9% Recall@500 over strong multimodal baselines. Overall, our framework bridges scalability with optimal transport-based learning, making multi-view pretraining practical for large-scale e-commerce search.
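The quadratic-to-constant claim can be made concrete with a small counting sketch. The function names and the sampling budget `k` are assumptions for illustration; the point is only that anchoring on the primary view and sampling a fixed number of auxiliaries decouples per-item cost from the total view count.

```python
def exhaustive_pairs(num_views):
    """Pairs trained per item if every view is aligned with every other:
    V*(V-1)/2, i.e. quadratic in the number of views."""
    return num_views * (num_views - 1) // 2

def sampled_pairs(num_views, k=2):
    """Pairs trained per item when the primary view is the anchor and
    only k auxiliary views are stochastically sampled per step:
    constant in the number of views once V > k."""
    return min(k, num_views - 1)

# e.g. a listing with 8 views: 28 exhaustive pairs vs. 2 sampled pairs,
# and the sampled count stays 2 even for a 100-view listing.
```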
Problem

Research questions and friction points this paper is trying to address.

Aligns multimodal and multi-view data for e-commerce
Reduces training cost from quadratic to constant per item
Improves retrieval performance in large-scale e-commerce search
Innovation

Methods, ideas, or system contributions that make the work stand out.

Factorized Transport for multimodal multi-view alignment
Stochastic auxiliary view sampling reduces training cost
Fused cached embeddings maintain efficient two-tower retrieval
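The fused-embedding idea in the last bullet can be sketched as follows: per-view embeddings are pooled offline into one cached item vector, so online scoring remains a single dot product against the query tower. Mean pooling and the variable names here are assumptions; the paper only states that views are fused into a single cached embedding.

```python
import numpy as np

def fuse_views(view_embs):
    """Offline fusion: mean-pool per-view embeddings into one vector,
    then L2-normalize so online scoring is a plain dot product."""
    fused = np.mean(view_embs, axis=0)
    return fused / np.linalg.norm(fused)

rng = np.random.default_rng(1)
views = rng.normal(size=(5, 16))      # e.g. primary + 4 auxiliary views
item_vec = fuse_views(views)          # cached in the ANN index offline

query = rng.normal(size=16)
query = query / np.linalg.norm(query)
score = float(query @ item_vec)       # unchanged two-tower scoring online
```

Because fusion happens at indexing time, serving sees exactly one vector per item, which is why the summary can claim zero additional online overhead.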