🤖 AI Summary
This work addresses the challenge of uniformly supporting search, recommendation, and reasoning tasks over large-scale heterogeneous product catalogs by proposing NEO, a framework that adapts decoder-only large language models into an end-to-end system capable of directly generating real products without external tools. NEO interleaves natural language with typed product identifiers (SIDs) in a unified sequence and treats SIDs as a distinct modality, with text prompts steering the task, target entity type, and output format. By combining staged alignment and instruction tuning, it enables controllable generation across tasks, entity types, and output formats. Experiments on a catalog containing tens of millions of products demonstrate that NEO significantly outperforms strong baselines across multiple tasks and exhibits strong cross-task transfer.
📝 Abstract
LLMs are increasingly applied to recommendation, retrieval, and reasoning, yet deploying a single end-to-end model that can jointly support these behaviors over large, heterogeneous catalogs remains challenging. Such systems must generate unambiguous references to real items, handle multiple entity types, and operate under strict latency and reliability constraints, requirements that are difficult to satisfy with text-only generation. While tool-augmented recommender systems address parts of this problem, they introduce orchestration complexity and limit end-to-end optimization. We view this setting as an instance of a broader research problem: how to adapt LLMs to reason jointly over entities from multiple domains, users, and language in a fully self-contained manner. To this end, we introduce NEO, a framework that adapts a pre-trained decoder-only LLM into a tool-free, catalog-grounded generator. NEO represents items as SIDs and trains a single model to interleave natural language and typed item identifiers within a shared sequence. Text prompts control the task, target entity type, and output format (IDs, text, or mixed), while constrained decoding guarantees catalog-valid item generation without restricting free-form text. We refer to this instruction-conditioned controllability as language-steerability. We treat SIDs as a distinct modality and study design choices for integrating discrete entity representations into LLMs via staged alignment and instruction tuning. We evaluate NEO at scale on a real-world catalog of over 10M items across multiple media types and discovery tasks, including recommendation, search, and user understanding. In offline experiments, NEO consistently outperforms strong task-specific baselines and exhibits cross-task transfer, demonstrating a practical path toward consolidating large-scale discovery capabilities into a single language-steerable generative model.
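The abstract's guarantee of catalog-valid item generation via constrained decoding can be illustrated with a minimal sketch, not the paper's implementation: a prefix trie built over the tokenized SIDs of all catalog items, queried at each decoding step to mask the model's logits so that only tokens extending toward a real item can be sampled. All names (`SIDTrie`, `allowed_next`) and the toy token sequences are illustrative assumptions.

```python
class SIDTrie:
    """Prefix trie over tokenized catalog item identifiers (illustrative)."""

    def __init__(self):
        self.children = {}

    def insert(self, token_ids):
        """Register one catalog item's SID token sequence."""
        node = self.children
        for t in token_ids:
            node = node.setdefault(t, {})

    def allowed_next(self, prefix):
        """Tokens that extend `prefix` toward at least one valid SID."""
        node = self.children
        for t in prefix:
            if t not in node:
                return set()  # prefix has left the catalog: nothing allowed
            node = node[t]
        return set(node.keys())


# Hypothetical tokenizations of three catalog items.
trie = SIDTrie()
trie.insert([7, 12, 3])
trie.insert([7, 12, 9])
trie.insert([7, 5, 1])

# At each step, the decoder's logits would be masked to this allowed set,
# leaving free-form text generation untouched outside SID spans.
assert trie.allowed_next([7, 12]) == {3, 9}
assert trie.allowed_next([8]) == set()
```

In practice this kind of per-step mask is what hooks like a prefix-allowed-tokens callback in common generation APIs expose; the trie lookup is O(prefix length) per step regardless of catalog size.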