🤖 AI Summary
To address three industrial challenges in generative recommendation (GR)—misalignment across multi-stage objectives, weak capability in complex intent reasoning, and poor scalability due to redundant multi-scenario modeling—this paper proposes a Fast-Slow Thinking dual-path architecture. The fast path employs an efficient encoder-decoder with Instruction-Guided Retrieval (IGR) for millisecond-level inference; the slow path integrates near-line LLMs with a Q2I (Query-to-Item) consistency loss to enhance world-knowledge-grounded deductive reasoning. The paper further introduces a "train-once, deploy-everywhere" unified optimization framework, incorporating semantic alignment, unified reward mapping, and Soft Adaptive Group Clip Policy Optimization (SA-GCPO). Experiments demonstrate that the system achieves significant gains in intent-understanding accuracy while maintaining low latency, reduces multi-scenario operational costs by over 70%, and enables rapid cross-business-line adaptation.
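As a concrete illustration of the fast path's Instruction-Guided Retrieval, the sketch below scores historical behavior embeddings against a reasoning-instruction embedding by cosine similarity and keeps only the most intent-relevant ones. This is a minimal assumption-based sketch, not the paper's implementation: the function name, the use of cosine similarity, and the top-k cutoff are all illustrative choices.

```python
import numpy as np

def instruction_guided_retrieval(instr_emb, behavior_embs, top_k=2):
    """Hypothetical IGR sketch: keep the top-k historical behaviors whose
    embeddings are most similar (cosine) to the reasoning instruction."""
    instr = instr_emb / np.linalg.norm(instr_emb)
    behaviors = behavior_embs / np.linalg.norm(behavior_embs, axis=1, keepdims=True)
    scores = behaviors @ instr                 # cosine similarity per behavior
    keep = np.argsort(scores)[::-1][:top_k]    # indices of most relevant behaviors
    return np.sort(keep)                       # restore chronological order

# Toy example: behaviors 1 and 3 point roughly along the instruction.
instr = np.array([1.0, 0.0])
hist = np.array([[0.0, 1.0], [1.0, 0.1], [-1.0, 0.0], [0.9, 0.2]])
print(instruction_guided_retrieval(instr, hist, top_k=2))  # → [1 3]
```

The filtered subsequence would then be fed to the encoder-decoder backbone, so the real-time generator conditions only on behaviors consistent with the slow path's inferred intent.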
📝 Abstract
Traditional recommendation systems suffer from inconsistency in multi-stage optimization objectives. Generative Recommendation (GR) mitigates this inconsistency through an end-to-end framework; however, existing methods still rely on matching mechanisms based on inductive patterns. Although responsive, they lack the ability to uncover complex user intents that require deductive reasoning based on world knowledge. Meanwhile, LLMs show strong deep-reasoning capabilities, but their latency and computational costs remain challenging for industrial applications. More critically, there are performance bottlenecks in multi-scenario scalability: as shown in Figure 1, existing solutions require independent training and deployment for each scenario, leading to low resource utilization and high maintenance costs, a challenge unaddressed in the GR literature. To address these challenges, we present OxygenREC, an industrial recommendation system that leverages Fast-Slow Thinking to deliver deep reasoning under the strict latency and multi-scenario requirements of real-world environments. First, we adopt a Fast-Slow Thinking architecture. Slow thinking uses a near-line LLM pipeline to synthesize Contextual Reasoning Instructions, while fast thinking employs a high-efficiency encoder-decoder backbone for real-time generation. Second, to ensure reasoning instructions effectively enhance recommendation generation, we introduce a semantic alignment mechanism with Instruction-Guided Retrieval (IGR) to filter intent-relevant historical behaviors and use a Query-to-Item (Q2I) loss for instruction-item consistency. Finally, to resolve multi-scenario scalability, we transform scenario information into controllable instructions, using unified reward mapping and Soft Adaptive Group Clip Policy Optimization (SA-GCPO) to align policies with diverse business objectives, realizing a train-once, deploy-everywhere paradigm.
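To make the multi-scenario optimization idea concrete, here is a hedged sketch of a group-adaptive clipped policy surrogate in the PPO/GRPO family, where each scenario (group) gets its own clip width. The abstract does not specify SA-GCPO's actual "soft" clipping mechanism, so this sketch stands in with standard hard clipping and per-group widths; the function name, `group_scale` parameter, and clip formula are all assumptions, not the paper's method.

```python
import numpy as np

def group_clip_surrogate(ratios, advantages, group_ids, base_eps=0.2, group_scale=None):
    """Illustrative group-adaptive clipped surrogate (PPO-style).

    ratios:     policy probability ratios pi_new / pi_old, shape (N,)
    advantages: per-sample advantage estimates, shape (N,)
    group_ids:  scenario label per sample; each group g gets clip width
                eps_g = base_eps * group_scale[g], so policies can move
                more or less aggressively per business scenario.
    """
    if group_scale is None:
        group_scale = {}
    eps = np.array([base_eps * group_scale.get(g, 1.0) for g in group_ids])
    clipped = np.clip(ratios, 1.0 - eps, 1.0 + eps)
    # Standard pessimistic bound: take the min of unclipped and clipped terms.
    return np.minimum(ratios * advantages, clipped * advantages).mean()

# Scenario "b" is allowed a wider clip range than scenario "a".
loss = group_clip_surrogate(
    ratios=np.array([1.5, 0.5]),
    advantages=np.array([1.0, 1.0]),
    group_ids=["a", "b"],
    base_eps=0.2,
    group_scale={"a": 1.0, "b": 2.0},
)
print(loss)  # → 0.85  (min(1.5, 1.2) and min(0.5, 0.6), averaged)
```

Under a unified reward mapping, all scenarios would share one policy and one training run, with scenario identity entering only through the instruction conditioning and the per-group clip schedule, which is the intuition behind the train-once, deploy-everywhere paradigm.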