Quantized Inference for OneRec-V2

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of deploying low-precision quantization in traditional recommendation systems, which suffer from large variances in weights and activations as well as suboptimal hardware utilization. Focusing on the generative recommendation model OneRec-V2, this study pioneers the adaptation of mature FP8 post-training quantization techniques from large language models to large-scale recommendation systems. By analyzing the LLM-like numerical distribution characteristics of OneRec-V2, the authors develop an efficient quantization and inference optimization framework. The proposed approach substantially improves hardware utilization, achieving a 49% reduction in end-to-end inference latency and a 92% increase in throughput while preserving key recommendation metrics. These results demonstrate the significant potential of generative recommendation architectures for system-level optimization.

📝 Abstract
Quantized inference has demonstrated substantial system-level benefits in large language models while preserving model quality. In contrast, reliably applying low-precision quantization to recommender systems remains challenging in industrial settings. This difficulty arises from differences in training paradigms, architectural patterns, and computational characteristics, which lead to distinct numerical behaviors in weights and activations. Traditional recommender models often exhibit high-magnitude, high-variance weights and activations, making them more sensitive to quantization-induced perturbations. In addition, recommendation workloads frequently suffer from low hardware utilization, which limits the practical gains of low-precision computation. In this work, we revisit low-precision inference in the context of generative recommendation. Through empirical distribution analysis, we show that the weight and activation statistics of OneRec-V2 are significantly more controlled and closer to those of large language models than those of traditional recommendation models. Moreover, OneRec-V2 exhibits a more compute-intensive inference pattern with substantially higher hardware utilization, enabling greater end-to-end throughput gains from low-precision computation. Leveraging this property, we develop an FP8 post-training quantization framework and integrate it into an optimized inference infrastructure. The proposed joint optimization achieves a 49% reduction in end-to-end inference latency and a 92% increase in throughput. Extensive online A/B testing further confirms that FP8 inference introduces no degradation in core metrics. These results suggest that as recommender systems evolve toward the paradigms of large language models, algorithm-level and system-level optimization techniques established in the LLM domain can be effectively adapted to large-scale recommendation workloads.
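FP8 post-training quantization recipes of the kind the abstract describes typically calibrate a per-tensor scale from the weight's absolute maximum and round scaled values to the FP8 E4M3 grid. A minimal sketch of that idea follows; the function names and the pure-Python E4M3 simulation are illustrative assumptions, not the paper's actual implementation, which casts to hardware FP8 types:

```python
import math

E4M3_MAX = 448.0  # largest finite magnitude in FP8 E4M3 (e4m3fn)

def qdq_e4m3(x, scale):
    """Quantize-dequantize round trip: scale x, round to the E4M3 grid, rescale.

    Software simulation with 3 mantissa bits and a minimum normal exponent
    of -6; real deployments cast to a hardware FP8 dtype instead.
    """
    y = max(-E4M3_MAX, min(E4M3_MAX, x / scale))
    if y == 0.0:
        return 0.0
    e = max(math.floor(math.log2(abs(y))), -6)  # clamp into subnormal range
    step = 2.0 ** (e - 3)                       # 3 mantissa bits -> 8 steps per binade
    return round(y / step) * step * scale

def ptq_per_tensor(weights):
    """Per-tensor absmax calibration: one FP8 scale for the whole tensor."""
    scale = max(abs(w) for w in weights) / E4M3_MAX or 1.0  # guard all-zero tensors
    return [qdq_e4m3(w, scale) for w in weights], scale
```

Per-tensor absmax scaling is the simplest calibration choice; production frameworks often refine it with per-channel or per-block scales when activation outliers would otherwise dominate a single scale.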
Problem

Research questions and friction points this paper is trying to address.

quantized inference
recommender systems
low-precision quantization
hardware utilization
numerical stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

quantized inference
FP8 quantization
generative recommendation
hardware utilization
post-training quantization
Authors
Yi Su — Kuaishou Inc., Beijing, China
Xinchen Luo — Kuaishou
Hongtao Cheng — Kuaishou Inc., Beijing, China
Ziteng Shu — Kuaishou Inc., Beijing, China
Yunfeng Zhao — Tianjin University
Fangyu Zhang — Kuaishou Inc., Beijing, China
Jiaqiang Liu — Kuaishou Inc., Beijing, China
Xiao Liang — Kuaishou Inc., Beijing, China
Yiwu Liu — Kuaishou Inc., Beijing, China
Ruiming Tang — Kuaishou Inc., Beijing, China