🤖 AI Summary
Recommender systems face major challenges in deploying ultra-large-scale models: continual learning on online streaming data under shifting distributions, adapting to heterogeneous recommendation surfaces, and stringent latency and computational constraints. This paper proposes the Foundation-Expert paradigm: a centralized foundation model learns general-purpose, cross-surface representations from lifelong, multimodal user data, and this knowledge is efficiently transferred to lightweight, surface-specific expert models via target-aware embeddings under a decoupled training/inference architecture, enabling lifelong learning, multimodal fusion, and low-overhead knowledge transfer. To realize this paradigm in production, the authors build HyperCast, a system that rearchitects the training, serving, logging, and iteration pipelines. Deployed in Meta's production environment and serving tens of billions of requests daily, HyperCast significantly improves online metrics over the previous single-stage production system while improving developer velocity and maintaining high infrastructure efficiency; to the authors' knowledge, this is the first deployment of the Foundation-Expert paradigm at this scale.
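To make the handoff concrete, here is a minimal PyTorch-style sketch of the Foundation-Expert pattern as the summary describes it. Every name and dimension here (FoundationModel, ExpertModel, D_FM, and so on) is an illustrative assumption, not the paper's actual architecture.

```python
# Minimal sketch of the Foundation-Expert pattern: a large cross-surface model
# produces target-aware embeddings, and a small surface-specific expert scores
# candidates on top of them. All names/sizes are hypothetical.
import torch
import torch.nn as nn

D_USER, D_ITEM, D_FM, D_EXPERT = 256, 128, 512, 64  # hypothetical dimensions

class FoundationModel(nn.Module):
    """Large centralized model: maps lifelong user features plus a candidate
    item (the "target") to a target-aware embedding."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(D_USER + D_ITEM, D_FM), nn.ReLU(), nn.Linear(D_FM, D_FM)
        )

    def forward(self, user_history: torch.Tensor, target_item: torch.Tensor) -> torch.Tensor:
        # Conditioning on the candidate item is what makes the embedding "target-aware".
        return self.encoder(torch.cat([user_history, target_item], dim=-1))

class ExpertModel(nn.Module):
    """Lightweight surface-specific head trained on foundation embeddings
    plus local surface features."""
    def __init__(self, d_surface: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(D_FM + d_surface, D_EXPERT), nn.ReLU(), nn.Linear(D_EXPERT, 1)
        )

    def forward(self, fm_embedding: torch.Tensor, surface_features: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([fm_embedding, surface_features], dim=-1))

# Decoupled usage: the expensive foundation pass runs once (and can be cached),
# while each surface's expert adds only a small amount of compute on top.
fm, expert = FoundationModel(), ExpertModel(d_surface=32)
user, item, surface = torch.randn(8, D_USER), torch.randn(8, D_ITEM), torch.randn(8, 32)
with torch.no_grad():          # expert consumes, but does not backprop through, the FM
    emb = fm(user, item)
score = expert(emb, surface)   # surface-specific prediction, e.g. a click logit
```

The key property is that the foundation embedding is already specialized to the item being scored, so the expert can adapt to local distributions and objectives while staying small.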
📝 Abstract
While scaling laws promise significant performance gains for recommender systems, efficiently deploying hyperscale models remains a major unsolved challenge. In contrast to fields such as natural language processing and computer vision, where foundation models (FMs) are already widely adopted, progress in recommender systems is hindered by unique challenges: the need to learn from online streaming data under shifting distributions, the need to adapt to recommendation surfaces with widely diverse downstream tasks and input distributions, and stringent latency and computational constraints. To bridge this gap, we propose the Foundation-Expert Paradigm, a framework for the development and deployment of hyperscale recommendation FMs. In our approach, a central FM is trained on lifelong, cross-surface, multimodal user data to learn generalizable knowledge. This knowledge is then efficiently transferred to lightweight, surface-specific "expert" models via target-aware embeddings, allowing them to adapt to local data distributions and optimization goals with minimal overhead. To meet our training, inference, and development needs, we built HyperCast, a production-grade infrastructure system that re-engineers training, serving, logging, and iteration to power this decoupled paradigm. Our approach is now deployed at Meta, serving tens of billions of user requests daily, and demonstrates online metric improvements over our previous one-stage production system while improving developer velocity and maintaining infrastructure efficiency. To the best of our knowledge, this work represents the first successful deployment of the Foundation-Expert paradigm at this scale, offering a proven, compute-efficient, and developer-friendly blueprint for realizing the promise of scaling laws in recommender systems.
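As a rough illustration of the training/serving decoupling the abstract attributes to HyperCast, the sketch below (reusing the FoundationModel/ExpertModel classes and dimensions from the earlier sketch) caches foundation embeddings so the latency-critical request path usually pays only for the lightweight expert pass. The cache key, TTL, and fallback policy are invented for illustration and do not reflect HyperCast's actual infrastructure.

```python
# Reuses fm, expert, D_USER, D_ITEM from the sketch above.
import time
import torch

EMBEDDING_TTL_S = 3600.0  # hypothetical freshness bound for cached embeddings

class EmbeddingCache:
    """Toy in-process cache standing in for a distributed embedding store."""
    def __init__(self):
        self._store: dict[tuple[int, int], tuple[float, torch.Tensor]] = {}

    def put(self, user_id: int, item_id: int, emb: torch.Tensor) -> None:
        self._store[(user_id, item_id)] = (time.time(), emb)

    def get(self, user_id: int, item_id: int) -> torch.Tensor | None:
        entry = self._store.get((user_id, item_id))
        if entry is None or time.time() - entry[0] > EMBEDDING_TTL_S:
            return None  # miss or stale: caller falls back to a fresh foundation pass
        return entry[1]

@torch.no_grad()  # serving is inference-only
def serve_request(cache, fm, expert, user_id, item_id, user, item, surface):
    """Latency-critical path: prefer the cached foundation embedding."""
    emb = cache.get(user_id, item_id)
    if emb is None:                       # slow path: run the large foundation model
        emb = fm(user, item)
        cache.put(user_id, item_id, emb)
    return expert(emb, surface)           # cheap surface-specific expert pass

cache = EmbeddingCache()
u, i, s = torch.randn(1, D_USER), torch.randn(1, D_ITEM), torch.randn(1, 32)
first = serve_request(cache, fm, expert, 42, 7, u, i, s)   # cold: foundation + expert
second = serve_request(cache, fm, expert, 42, 7, u, i, s)  # warm: expert only
```

In this toy setting the steady-state cost of a request is dominated by the expert, which is the compute-efficiency argument the paradigm rests on.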