🤖 AI Summary
In ultra-large-scale recommendation systems (e.g., LinkedIn Feed), new users suffer from low recall quality and poor retention due to sparse social connections.
Method: This paper proposes a dual-encoder retrieval architecture built on a causal language model (Meta's LLaMA 3). Relying solely on textual inputs, it quantizes numerical features into discrete textual buckets within the prompt so that this information is properly encoded in the embeddings, and combines large-scale fine-tuning with low-latency serving infrastructure to retrieve from a pool of hundreds of millions of candidates within a millisecond-level latency budget at thousands of queries per second.
Contribution/Results: It demonstrates that generative large language models can be adapted to industrial-grade real-time retrieval, improving semantic alignment between the retrieval and ranking stages. Offline evaluations and an online A/B test show substantial gains in member engagement, with the largest improvements among newer members who lack strong network connections, indicating that high-quality suggested content aids retention in production-scale environments.
📝 Abstract
In large-scale recommendation systems like the LinkedIn Feed, the retrieval stage is critical for narrowing hundreds of millions of potential candidates to a manageable subset for ranking. LinkedIn's Feed serves suggested content from outside of the member's network (based on the member's topical interests), where 2000 candidates are retrieved from a pool of hundreds of millions of candidates with a latency budget of a few milliseconds and inbound traffic of several thousand queries per second. This paper presents a novel retrieval approach that fine-tunes a large causal language model (Meta's LLaMA 3) as a dual encoder to generate high-quality embeddings for both users (members) and content (items), using only textual input. We describe the end-to-end pipeline, including prompt design for embedding generation, techniques for fine-tuning at LinkedIn's scale, and infrastructure for low-latency, cost-effective online serving. We share our findings on how quantizing numerical features in the prompt enables the information to get properly encoded in the embedding, facilitating greater alignment between the retrieval and ranking layers. The system was evaluated using offline metrics and an online A/B test, which showed substantial improvements in member engagement. We observed significant gains among newer members, who often lack strong network connections, indicating that high-quality suggested content aids retention. This work demonstrates how generative language models can be effectively adapted for real-time, high-throughput retrieval in industrial applications.
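The abstract's idea of quantizing numerical features in the prompt can be sketched as bucketing a continuous value into a coarse textual level, so a text-only encoder can absorb it. The field names, bucket edges, and labels below are illustrative assumptions, not the paper's actual scheme:

```python
# Hypothetical sketch: map continuous features to discrete textual buckets
# before placing them in the member prompt. Edges/labels are invented here.

def quantize_feature(name: str, value: float, edges: list[float], labels: list[str]) -> str:
    """Map a numeric value into a discrete textual bucket for the prompt."""
    assert len(labels) == len(edges) + 1
    idx = sum(value >= e for e in edges)  # index of the bucket the value falls in
    return f"{name}: {labels[idx]}"

def build_member_prompt(profile: dict) -> str:
    """Assemble a text-only prompt from profile fields plus quantized numerics."""
    parts = [
        f"headline: {profile['headline']}",
        quantize_feature("connection count", profile["connections"],
                         edges=[10, 100, 1000],
                         labels=["very few", "few", "many", "very many"]),
        quantize_feature("days since signup", profile["tenure_days"],
                         edges=[7, 90],
                         labels=["new member", "recent member", "established member"]),
    ]
    return " | ".join(parts)

prompt = build_member_prompt(
    {"headline": "Data engineer", "connections": 5, "tenure_days": 3}
)
# -> "headline: Data engineer | connection count: very few | days since signup: new member"
```

The intuition, per the abstract, is that a raw number like "5" carries little signal for a language model, while a discrete label such as "very few" lands in vocabulary the model already understands, letting the embedding encode the feature and align better with the ranking layer.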