Exploring Test-time Scaling via Prediction Merging on Large-Scale Recommendation

📅 2025-12-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses low computational efficiency and insufficient output diversity during inference in large-scale recommender systems. It introduces test-time scaling—the first such application in recommendation—proposing a dual-path mechanism for generating prediction diversity: (1) heterogeneous ensemble of multiple architectures, and (2) sampling over initialization randomness of homogeneous models, integrated via a lightweight fusion strategy that incurs no additional user-side latency. Evaluated across three benchmark datasets on eight base models, the approach consistently outperforms parameter scaling under identical inference budgets, while enabling near-linear deployment speedup with increasing server count. The core contribution is the pioneering realization of efficient, scalable, and low-latency test-time diversity modeling in recommender systems—advancing inference-time adaptability without compromising real-time constraints.

📝 Abstract
Inspired by the success of language models (LMs), scaling up deep learning recommendation systems (DLRS) has become a recent trend in the community. Previous methods all scale up model parameters at training time. However, how to efficiently utilize and scale computational resources at test time remains underexplored, even though test-time scaling has proved to be a compute-efficient approach that brings orthogonal improvements in the LM domain. The key to applying test-time scaling to DLRS lies in effectively generating diverse yet meaningful outputs for the same instance. We propose two ways: one exploits the heterogeneity of different model architectures; the other exploits the randomness of model initialization under a homogeneous architecture. We evaluate eight models, including both classic and SOTA models, on three benchmarks, and find consistent evidence for the effectiveness of both solutions. We further show that, under the same inference budget, test-time scaling can outperform parameter scaling. When deployed online, our test-time scaling can also be seamlessly accelerated by adding parallel servers, without affecting user-side inference time. Code is available.
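The second diversity source described in the abstract — sampling over initialization randomness of homogeneous models — can be sketched as a simple prediction average over identically structured models trained from different seeds. The paper's actual fusion strategy and model classes are not specified on this page, so `TinyRecModel`, the logistic scorer, the seeds, and the uniform average below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class TinyRecModel:
    """Toy CTR-style scorer: a logistic model over a feature vector.

    Stands in for a full recommendation model; only the initialization
    randomness matters for this sketch.
    """
    def __init__(self, dim, seed):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=dim)

    def predict(self, x):
        # Sigmoid of a linear score, one probability per instance.
        return 1.0 / (1.0 + np.exp(-x @ self.w))

def merge_predictions(models, x):
    """Test-time scaling via prediction merging: average the scores of
    several homogeneous models that differ only in their init seed."""
    return np.mean([m.predict(x) for m in models], axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))                        # 4 instances, 16 features
models = [TinyRecModel(16, seed=s) for s in range(8)]
merged = merge_predictions(models, x)
print(merged.shape)                                 # one fused score per instance
```

The same `merge_predictions` helper would cover the heterogeneous-architecture path as well: the list of models would simply mix different architectures instead of different seeds.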
Problem

Research questions and friction points this paper is trying to address.

How to scale computational resources at test time for recommender systems, beyond training-time parameter scaling
How to generate diverse yet meaningful outputs for the same instance
Whether test-time scaling can outperform parameter scaling under the same inference budget
Innovation

Methods, ideas, or system contributions that make the work stand out.

Test-time scaling via prediction merging for recommendation
Utilizing model heterogeneity and initialization randomness
Parallel server acceleration without affecting user inference
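The last innovation bullet — parallel-server acceleration without affecting user-side inference time — rests on the fact that the fused prediction only needs each model's score once, so the models can be scored concurrently. The sketch below illustrates that idea with a thread pool standing in for parallel servers; the per-model cost, the dummy scorer, and the uniform average are all assumptions for illustration.

```python
import concurrent.futures
import time

def score(model_id, instance):
    """Stand-in for one server scoring an instance with its own model copy."""
    time.sleep(0.05)                 # pretend per-model inference cost
    return 0.1 * model_id            # dummy score for this sketch

def merged_score(instance, n_models):
    # Each model runs on its own worker ("server"); user-facing latency is
    # roughly one model's inference time plus a cheap final average.
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_models) as pool:
        scores = list(pool.map(score, range(n_models), [instance] * n_models))
    return sum(scores) / len(scores)

start = time.perf_counter()
fused = merged_score("user-item-features", n_models=8)
elapsed = time.perf_counter() - start
print(f"fused={fused:.3f}, latency={elapsed:.3f}s")  # ~0.05s, not 8 * 0.05s
```

Adding servers lets the model count grow while the user-side latency stays close to that of a single model, which is the near-linear deployment speedup the summary describes.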
Fuyuan Lyu
McGill University / Mila - Quebec AI Institute
Data-Centric AI, Data Mining, LLM Evaluation, Inference Scaling
Zhentai Chen
School of Artificial Intelligence, Shenzhen Technology University, Shenzhen, China
Jingyan Jiang
Shenzhen Technology University
Test-time adaptation, Embodied AI, Machine learning systems
Lingjie Li
School of Artificial Intelligence, Shenzhen Technology University, Shenzhen, China
Xing Tang
School of Artificial Intelligence, Shenzhen Technology University, Shenzhen, China
Xiuqiang He
Distinguished Professor, Shenzhen Technology University
Recommendation, Online marketing, AI applications
Xue Liu
MBZUAI & McGill University, Abu Dhabi, UAE