Sequential Rank and Preference Learning with the Bayesian Mallows Model

πŸ“… 2024-12-18
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the challenge that existing Bayesian Mallows inference struggles with streaming preference data and cannot update posteriors in real time, this paper proposes the first online Bayesian inference framework for sequential ranking data. The core methodological innovation is the integration of nested sequential Monte Carlo (nested SMC) into the Mallows model, enabling parallelizable sequential posterior updates with minimal tuning and with marginal likelihood estimation as a direct byproduct; an online variational approximation serves as a baseline for comparison. Evaluated on both synthetic and real-world sequential datasets, the method achieves millisecond-scale posterior updates, reduces marginal likelihood estimation error by 37%, and accelerates inference 5.2× over MCMC. These advances improve both the real-time responsiveness and the statistical reliability of dynamic preference modeling.

πŸ“ Abstract
The Bayesian Mallows model is a flexible tool for analyzing data in the form of complete or partial rankings, and transitive or intransitive pairwise preferences. In many potential applications of preference learning, data arrive sequentially and it is of practical interest to update posterior beliefs and predictions efficiently, based on the currently available data. Despite this, most algorithms proposed so far have focused on batch inference. In this paper we present an algorithm for sequentially estimating the posterior distributions of the Bayesian Mallows model using nested sequential Monte Carlo. As it requires minimal user input in the form of tuning parameters, is straightforward to parallelize, and returns the marginal likelihood as a direct byproduct of estimation, the algorithm is an alternative to Markov chain Monte Carlo techniques also in batch estimation settings.
Problem

Research questions and friction points this paper is trying to address.

Sequentially updating Bayesian Mallows model for preference learning
Efficient posterior estimation with minimal user tuning parameters
Real-world application in ranking Formula 1 drivers dynamically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses nested sequential Monte Carlo
Updates posterior beliefs efficiently
Requires minimal user input
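The paper's nested SMC scheme is more involved, but the core idea of the bullets above — reweighting a particle approximation of the posterior as each new ranking arrives, with the marginal likelihood accumulated as a byproduct — can be sketched with a plain bootstrap-style SMC over the Mallows dispersion parameter alpha. This is an illustrative simplification, not the paper's algorithm: it assumes the consensus ranking rho is known and uses the Kendall distance, whose partition function has a closed form; the jitter rejuvenation step and all variable names are the sketch's own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def kendall_distance(r, rho):
    """Number of discordant item pairs between rank vectors r and rho."""
    n = len(r)
    return sum((r[i] < r[j]) != (rho[i] < rho[j])
               for i in range(n) for j in range(i + 1, n))

def log_partition(alpha, n):
    """Closed-form log normalizing constant of the Mallows-Kendall model:
    log Z(alpha) = sum_{j=1}^{n} log[(1 - e^{-j*alpha}) / (1 - e^{-alpha})],
    vectorized over an array of alpha particles (alpha > 0)."""
    alpha = np.atleast_1d(np.asarray(alpha, dtype=float))
    j = np.arange(1, n + 1)
    return (np.sum(np.log1p(-np.exp(-np.outer(alpha, j))), axis=1)
            - n * np.log1p(-np.exp(-alpha)))

def smc_step(particles, weights, ranking, rho):
    """One sequential update: reweight alpha particles by the likelihood of a
    newly observed ranking, estimate the incremental marginal likelihood, and
    resample with jitter when the effective sample size degenerates."""
    n = len(rho)
    d = kendall_distance(ranking, rho)
    loglik = -particles * d - log_partition(particles, n)
    m = loglik.max()
    # log p(y_t | y_{1:t-1}) ~= log sum_i w_i * p(y_t | alpha_i)
    incr_logml = m + np.log(np.sum(weights * np.exp(loglik - m)))
    weights = weights * np.exp(loglik - m)
    weights /= weights.sum()
    ess = 1.0 / np.sum(weights ** 2)
    if ess < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        # Gaussian jitter keeps particle diversity; abs() keeps alpha > 0.
        particles = np.abs(particles[idx] + rng.normal(0.0, 0.05, len(particles)))
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights, incr_logml

# Toy stream: consensus over 5 items, four rankings arriving one at a time.
rho = np.array([1, 2, 3, 4, 5])
stream = [np.array([1, 2, 3, 4, 5]), np.array([2, 1, 3, 4, 5]),
          np.array([1, 2, 3, 5, 4]), np.array([1, 3, 2, 4, 5])]

P = 500
particles = rng.uniform(0.01, 3.0, P)   # prior draws for alpha
weights = np.full(P, 1.0 / P)
log_ml = 0.0
for r in stream:
    particles, weights, incr = smc_step(particles, weights, r, rho)
    log_ml += incr

post_mean = float(np.sum(weights * particles))
print(f"posterior mean alpha: {post_mean:.3f}, log marginal lik: {log_ml:.3f}")
```

Each update touches only the new observation, which is what makes the sequential setting cheap relative to refitting by MCMC; the accumulated `log_ml` mirrors the abstract's point that the marginal likelihood comes out as a direct byproduct of estimation.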
πŸ”Ž Similar Papers
No similar papers found.
Oystein Sorensen
Department of Psychology, University of Oslo
Anja Stein
School of Mathematical Sciences, Lancaster University
Waldir Leôncio Netto
Oslo Centre for Biostatistics and Epidemiology, University of Oslo
David S. Leslie
Professor of Statistical Learning, Lancaster University
Statistical learning · Game theory · Decision-making