OneVision: An End-to-End Generative Framework for Multi-view E-commerce Vision Search

📅 2025-10-07
🤖 AI Summary
To address cross-view representation inconsistency and conflicting optimization objectives across stages in multi-stage cascaded architectures (MCAs) for e-commerce multi-view visual search, this paper proposes OneVision, a unified end-to-end generative framework. Its core contributions are: (1) vision-aligned residual quantization (VRQ) encoding, enabling consistent cross-view feature representation; (2) a multi-stage semantic alignment mechanism that jointly leverages visual similarity priors and user preference signals to co-optimize retrieval accuracy and personalization; and (3) a semantic-ID-centric modeling approach combined with dynamic pruning to improve inference efficiency. Offline, OneVision matches the deployed MCA while improving inference efficiency by 21% through dynamic pruning. Online A/B testing demonstrates statistically significant improvements: +2.15% click-through rate, +2.27% conversion rate, and +3.12% order volume, substantially outperforming the conventional MCA-based approach.

📝 Abstract
Traditional vision search, similar to search and recommendation systems, follows the multi-stage cascading architecture (MCA) paradigm to balance efficiency and conversion. Specifically, the query image undergoes feature extraction, recall, pre-ranking, and ranking stages, ultimately presenting the user with semantically similar products that meet their preferences. The multi-view representation discrepancy of the same object between the query and candidate items, together with the optimization objectives that collide across these stages, makes it difficult to achieve Pareto optimality in both user experience and conversion. In this paper, an end-to-end generative framework, OneVision, is proposed to address these problems. OneVision builds on VRQ, a vision-aligned residual quantization encoding, which can align the vastly different representations of an object across multiple viewpoints while preserving the distinctive features of each product as much as possible. A multi-stage semantic alignment scheme is then adopted to maintain strong visual similarity priors while effectively incorporating user-specific information for personalized preference generation. In offline evaluations, OneVision performs on par with the online MCA while improving inference efficiency by 21% through dynamic pruning. In A/B tests, it achieves significant online improvements: +2.15% item CTR, +2.27% CVR, and +3.12% order volume. These results demonstrate that a semantic-ID-centric generative architecture can unify retrieval and personalization while simplifying the serving pathway.
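The page does not spell out the VRQ encoder, but the residual quantization it builds on is a standard technique: a vector is encoded as a short sequence of codebook indices (a "semantic ID"), with each level quantizing the residual left by the previous one. The sketch below is illustrative only; the codebook sizes, depth, and the vision-alignment training objective are assumptions, not the paper's actual design.

```python
import numpy as np

def residual_quantize(vec, codebooks):
    """Encode a vector as a sequence of codebook indices (a semantic ID).

    At each level, pick the nearest code, subtract it, and pass the
    residual to the next codebook.
    """
    residual = vec.astype(np.float64).copy()
    semantic_id = []
    for codebook in codebooks:               # codebook: (K, d) array
        dists = np.linalg.norm(codebook - residual, axis=1)
        idx = int(np.argmin(dists))
        semantic_id.append(idx)
        residual = residual - codebook[idx]  # carry residual to next level
    return semantic_id, residual

def reconstruct(semantic_id, codebooks):
    """Sum the selected codes to approximate the original vector."""
    return sum(cb[i] for cb, i in zip(codebooks, semantic_id))

# Toy setup: 3 levels of 16 codes over 8-dim embeddings (hypothetical sizes).
rng = np.random.default_rng(0)
d, K, levels = 8, 16, 3
codebooks = [rng.normal(size=(K, d)) for _ in range(levels)]
v = rng.normal(size=d)
sid, res = residual_quantize(v, codebooks)
# reconstruction error equals the final residual by construction
assert np.allclose(v - reconstruct(sid, codebooks), res)
```

Because every item collapses to a short discrete ID, retrieval can be cast as generating that ID token by token, which is what makes the end-to-end generative formulation possible.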
Problem

Research questions and friction points this paper is trying to address.

Resolves multi-view representation discrepancies in e-commerce search
Aligns visual similarity with personalized user preferences
Unifies retrieval and personalization through generative semantic IDs
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end generative framework for multi-view search
Vision-aligned residual quantization encoding representation
Multi-stage semantic alignment with dynamic pruning
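The page gives no details of the dynamic pruning behind the 21% efficiency gain, but a common pattern for semantic-ID generation is constrained beam search that drops low-scoring prefixes early. The sketch below is a generic illustration under that assumption; `score_fn`, the margin rule, and the catalog shape are all hypothetical, not the paper's method.

```python
def beam_generate(score_fn, valid_ids, levels, beam=4, margin=4.0):
    """Generate semantic IDs level by level with dynamic pruning.

    score_fn(prefix, token) -> log-probability of appending `token`.
    valid_ids: set of full semantic-ID tuples (the item catalog), used
    to constrain decoding to prefixes of real items.
    Beams falling more than `margin` log-prob behind the best are
    dropped early, shrinking the search at each level.
    """
    prefixes = {tuple(s[:k]) for s in valid_ids for k in range(levels + 1)}
    vocab = max(max(s) for s in valid_ids) + 1
    beams = [((), 0.0)]
    for _ in range(levels):
        cands = []
        for prefix, lp in beams:
            for tok in range(vocab):
                nxt = prefix + (tok,)
                if nxt in prefixes:  # only extend prefixes of real items
                    cands.append((nxt, lp + score_fn(prefix, tok)))
        cands.sort(key=lambda c: -c[1])
        best = cands[0][1]
        beams = [c for c in cands[:beam] if c[1] >= best - margin]  # prune
    return beams

# Toy catalog of three semantic IDs; the score favors one of them.
valid_ids = {(0, 1, 2), (0, 1, 3), (1, 0, 0)}
preferred = {(0,), (0, 1), (0, 1, 2)}
score_fn = lambda p, t: 0.0 if p + (t,) in preferred else -1.0
top = beam_generate(score_fn, valid_ids, levels=3)
```

Pruning both the beam width and the score margin at each level is what lets generation skip most of the catalog, which is the kind of saving the reported 21% efficiency gain points to.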
Zexin Zheng
Kuaishou Technology, Beijing, China
Huangyu Dai
Kuaishou Technology, Beijing, China
Lingtao Mao
Kuaishou Technology, Beijing, China
Xinyu Sun
Kuaishou Technology, Beijing, China
Zihan Liang
Kuaishou Technology, Beijing, China
Ben Chen
KuaiShou, Alibaba, HUST, WHU
Multimodal LLM · Generative Recommendation · Semantic Matching
Yuqing Ding
Kuaishou Technology, Beijing, China
Chenyi Lei
Kuaishou Technology
Recommender System · Information Retrieval · Generative Recommendation · Multimodal
Wenwu Ou
Kuaishou Technology, Beijing, China
Han Li
Kuaishou Technology, Beijing, China
Kun Gai
Senior Director & Researcher, Alibaba Group
Machine Learning · Computational Advertising