🤖 AI Summary
This work addresses the challenge of modeling complex user preferences in sequential recommendation, where sparse interaction data and reliance on a single behavioral trajectory limit the performance of existing latent reasoning approaches. To overcome this bottleneck, the authors propose a parallel latent reasoning framework that constructs multiple diverse reasoning paths within a continuous latent space to enhance both representational capacity and generalization. Key innovations include learnable trigger tokens that initiate parallel reasoning streams, a global regularization term that preserves diversity across paths, and a mixture-based aggregation strategy that adaptively fuses the parallel reasoning streams. Extensive experiments on three real-world datasets demonstrate that the proposed framework significantly outperforms state-of-the-art methods while maintaining real-time inference efficiency. Theoretical analysis further corroborates the effectiveness of parallel reasoning in improving model generalization.
📝 Abstract
Capturing complex user preferences from sparse behavioral sequences remains a fundamental challenge in sequential recommendation. Recent latent reasoning methods have shown promise by extending test-time computation through multi-step reasoning, yet they rely exclusively on depth-level scaling along a single trajectory, suffering from diminishing returns as reasoning depth increases. To address this limitation, we propose **Parallel Latent Reasoning (PLR)**, a novel framework that pioneers width-level computational scaling by exploring multiple diverse reasoning trajectories simultaneously. PLR constructs parallel reasoning streams through learnable trigger tokens in continuous latent space, preserves diversity across streams via global reasoning regularization, and adaptively synthesizes multi-stream outputs through mixture-of-reasoning-streams aggregation. Extensive experiments on three real-world datasets demonstrate that PLR substantially outperforms state-of-the-art baselines while maintaining real-time inference efficiency. Theoretical analysis further validates the effectiveness of parallel reasoning in improving generalization capability. Our work opens new avenues for enhancing reasoning capacity in sequential recommendation beyond existing depth scaling.
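The width-level scaling idea can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the parameter names (`trigger_tokens`, `W_step`, `w_gate`), the shared transition matrix, and the softmax gate are all illustrative assumptions that stand in for the paper's learned components.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, steps = 8, 3, 2  # latent dim, number of parallel streams, reasoning depth

# Hypothetical stand-ins for learned parameters (illustrative only):
trigger_tokens = rng.normal(size=(K, d))       # one learnable trigger token per stream
W_step = rng.normal(size=(d, d)) / np.sqrt(d)  # shared latent reasoning transition
w_gate = rng.normal(size=(d,))                 # gating vector for mixture weights

def reason_stream(h_user, trigger, n_steps):
    """Roll one latent reasoning trajectory forward from a trigger token."""
    h = h_user + trigger
    for _ in range(n_steps):
        h = np.tanh(W_step @ h)  # one implicit reasoning step in latent space
    return h

h_user = rng.normal(size=(d,))  # encoded user behavior sequence

# Width-level scaling: run K diverse trajectories in parallel.
streams = np.stack([reason_stream(h_user, t, steps) for t in trigger_tokens])

# Mixture-of-reasoning-streams aggregation: softmax gate over stream scores.
scores = streams @ w_gate
weights = np.exp(scores - scores.max())
weights /= weights.sum()
h_final = weights @ streams  # adaptively fused user representation, shape (d,)
```

In the full framework a diversity regularizer would additionally push the `K` stream outputs apart during training; here the distinct trigger tokens alone make the trajectories differ.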