🤖 AI Summary
Existing recommender systems often neglect personalized ranking, the core objective, and prevailing model architectures fail to explicitly incorporate ranking-optimization signals. Method: We propose Rankformer, the first graph Transformer architecture to employ ranking-loss gradients as structural priors: it encodes gradient information directly into the graph structure to guide the evolution of user and item embeddings toward the ranking objective; introduces a positive-sample-aware sparse attention mechanism with linear complexity to drastically reduce computational overhead; and establishes an end-to-end ranking-oriented representation-learning paradigm. Contribution/Results: Rankformer consistently outperforms state-of-the-art methods across multiple public benchmarks while achieving a 10×–100× training speedup. The implementation is publicly available.
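The "ranking-loss gradient" signal can be made concrete with a standard pairwise objective. As a hedged illustration only (BPR is an assumption here; the summary does not name the exact loss Rankformer derives from), the Bayesian Personalized Ranking loss for a user $u$ with positive item set $\mathcal{P}_u$, and its gradient with respect to the user embedding $\mathbf{e}_u$, are:

```latex
% BPR-style pairwise ranking loss (illustrative assumption, not
% necessarily the exact objective Rankformer is derived from)
\mathcal{L}_u = -\sum_{i \in \mathcal{P}_u} \sum_{j \notin \mathcal{P}_u}
    \ln \sigma\!\left(\mathbf{e}_u^{\top}\mathbf{e}_i - \mathbf{e}_u^{\top}\mathbf{e}_j\right)

% Negative gradient w.r.t. the user embedding: a weighted sum of item
% embeddings in which positives pull and negatives push
-\frac{\partial \mathcal{L}_u}{\partial \mathbf{e}_u}
    = \sum_{i \in \mathcal{P}_u} \sum_{j \notin \mathcal{P}_u}
      \sigma\!\left(\mathbf{e}_u^{\top}\mathbf{e}_j - \mathbf{e}_u^{\top}\mathbf{e}_i\right)
      \left(\mathbf{e}_i - \mathbf{e}_j\right)
```

A descent step on such a loss moves $\mathbf{e}_u$ toward an attention-like weighted aggregation of item embeddings, which is the kind of structural cue the summary describes Rankformer encoding into its Transformer layers.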
📝 Abstract
Recommender Systems (RS) aim to generate personalized ranked lists for each user and are evaluated with ranking metrics. Although personalized ranking is a fundamental aspect of RS, this critical property is often overlooked in the design of model architectures. To address this issue, we propose Rankformer, a ranking-inspired recommendation model. Rankformer's design is inspired by the gradient of the ranking objective and takes the form of a unique (graph) Transformer architecture: it leverages global information from all users and items to produce more informative representations, and it employs specific attention weights to guide the evolution of embeddings toward improved ranking performance. We further develop an acceleration algorithm for Rankformer, reducing its complexity to linear in the number of positive instances. Extensive experimental results demonstrate that Rankformer outperforms state-of-the-art methods. The code is available at https://github.com/StupidThree/Rankformer.
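The claimed linear complexity in the number of positive instances can be illustrated with a toy decomposition. The NumPy sketch below is illustrative only: all names are hypothetical, it uses uniform pair weights rather than Rankformer's learned, embedding-dependent attention weights, and it shows only the general algebraic regrouping that makes an all-pairs sum collapse into two linear passes:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_items = 8, 100
V = rng.normal(size=(n_items, d))   # hypothetical item embeddings
pos = np.array([3, 17, 42])         # one user's positive items (toy data)
neg_mask = np.ones(n_items, dtype=bool)
neg_mask[pos] = False
neg = np.nonzero(neg_mask)[0]       # all remaining items as negatives

# Naive aggregation over every (positive, negative) pair: O(|P| * |N|)
naive = sum(V[j] - V[i] for i in pos for j in neg)

# Factorized form: the pair sum regroups into two single passes,
# O(|P| + |N|), because each positive meets every negative once
fast = len(pos) * V[neg].sum(axis=0) - len(neg) * V[pos].sum(axis=0)

assert np.allclose(naive, fast)
```

With |P| positives and |N| negatives per user, the factorized form costs O(|P| + |N|) instead of O(|P| · |N|); Rankformer's acceleration algorithm applies this style of regrouping to its attention-weighted sums, which is what yields complexity linear in the number of positive instances.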