🤖 AI Summary
This work addresses the computational bottleneck of minimizing the Kemeny distance, an NP-hard problem in multi-input rank aggregation, in large-scale settings. We propose Kemeny Transformer, the first end-to-end model that integrates the Transformer architecture with reinforcement learning and is trained directly to optimize the Kemeny distance. This design overcomes the efficiency and scalability limitations of traditional methods such as heuristics, Markov-chain approximations, and integer linear programming (ILP) solvers. Evaluated on multiple benchmark datasets, Kemeny Transformer achieves significantly faster inference while yielding higher-quality approximate consensus rankings, enabling efficient and scalable solutions to the consensus ranking problem.
📝 Abstract
Aggregating a consensus ranking from multiple input rankings is a fundamental problem with applications in recommendation systems, search engines, job recruitment, and elections. Despite decades of research on consensus ranking aggregation, minimizing the Kemeny distance remains computationally intractable: computing a Kemeny-optimal ranking is NP-hard, which limits exact methods to relatively small-scale instances. We propose the Kemeny Transformer, a novel Transformer-based algorithm trained via reinforcement learning to efficiently approximate the Kemeny-optimal ranking. Experimental results demonstrate that our model outperforms classical majority-heuristic and Markov-chain approaches while achieving substantially faster inference than integer linear programming solvers. Our approach thus offers a practical, scalable alternative for real-world ranking-aggregation tasks.
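To make the objective concrete, here is a minimal sketch (not the paper's method) of the Kemeny optimization problem it approximates: the Kemeny distance of a candidate ranking is the total number of pairwise disagreements (Kendall tau distance) with the input rankings, and the exact optimum requires searching over all permutations, which is why it is intractable at scale. All function names below are illustrative.

```python
from itertools import combinations, permutations

def kendall_tau(a, b):
    """Number of item pairs ordered one way in ranking `a` and the other way in `b`."""
    pos_b = {item: i for i, item in enumerate(b)}
    # `a` lists x before y; count a disagreement when b ranks y before x.
    return sum(1 for x, y in combinations(a, 2) if pos_b[x] > pos_b[y])

def kemeny_distance(candidate, rankings):
    """Kemeny distance: sum of Kendall tau distances to all input rankings."""
    return sum(kendall_tau(candidate, r) for r in rankings)

def kemeny_optimal(items, rankings):
    """Exact Kemeny-optimal ranking by brute force -- O(n!), feasible only for tiny n."""
    return min(permutations(items), key=lambda c: kemeny_distance(c, rankings))

rankings = [(1, 2, 3), (1, 3, 2), (2, 1, 3)]
best = kemeny_optimal([1, 2, 3], rankings)
print(best, kemeny_distance(best, rankings))
```

The factorial blow-up of the exact search is exactly the bottleneck the paper's learned model sidesteps by producing an approximate consensus in a single forward pass.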