On Pareto Optimality for the Multinomial Logistic Bandit

📅 2025-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the Pareto-optimal trade-off in dynamic multi-armed bandit recommendation: balancing cumulative reward maximization (e.g., collected coin rewards in a recommendation app) against accurate online estimation of the parameters of the user choice model (specifically, the Multinomial Logit, or MNL, model). The authors propose the first Pareto-optimal Upper Confidence Bound (UCB) algorithm for the multinomial logistic bandit setting, incorporating a forced-exploration mechanism to jointly optimize regret and parameter-estimation error in real time. Theoretically, they derive a joint information-theoretic lower bound on regret and estimation error, and prove that the algorithm simultaneously achieves sublinear regret and asymptotically consistent parameter estimation. Empirically, the method yields significant improvements in long-term cumulative reward and decision interpretability on real-world datasets.
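To make the choice model concrete, here is a minimal sketch of MNL choice probabilities over an offered assortment. The function name and dictionary-based interface are illustrative, not from the paper; the paper works with this model abstractly.

```python
import math

def mnl_choice_probs(utilities, assortment):
    """Multinomial Logit (MNL) choice probabilities over an assortment.

    P(choose i) = exp(v_i) / (1 + sum_j exp(v_j)), where the '1' in the
    denominator is the no-purchase (outside) option, whose utility is
    normalized to 0. Returns a dict mapping item -> probability, with
    key None for the outside option.
    """
    expv = {i: math.exp(utilities[i]) for i in assortment}
    denom = 1.0 + sum(expv.values())
    probs = {i: expv[i] / denom for i in assortment}
    probs[None] = 1.0 / denom  # probability of choosing nothing
    return probs
```

With two items of equal (zero) utility, each item and the outside option are chosen with probability 1/3, which is a quick sanity check on the normalization.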

📝 Abstract
We provide a new online learning algorithm for tackling the Multinomial Logit Bandit (MNL-Bandit) problem. Despite the challenges posed by the combinatorial nature of the MNL model, we develop a novel Upper Confidence Bound (UCB)-based method that achieves Pareto optimality by balancing regret minimization and estimation error of the assortment revenues and the MNL parameters. We develop theoretical guarantees characterizing the tradeoff between regret and estimation error for the MNL-Bandit problem through information-theoretic bounds, and propose a modified UCB algorithm that incorporates forced exploration to improve parameter estimation accuracy while maintaining low regret. Our analysis provides critical insights into how to optimally balance revenue collection and parameter estimation in dynamic assortment optimization.
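The forced-exploration idea in the abstract can be sketched in a simplified K-armed setting: with a decaying probability the learner pulls a uniformly random arm (guaranteeing enough samples for consistent estimation), and otherwise follows the UCB rule. This is an illustrative simplification under assumed Bernoulli rewards and a t^(-1/2) exploration schedule; the paper's actual setting is assortment selection under the MNL model, and its exploration rule may differ.

```python
import math
import random

def ucb_with_forced_exploration(means, horizon, rate=0.5, seed=0):
    """Simplified K-armed sketch of UCB plus forced exploration.

    `means` are true Bernoulli reward means, unknown to the learner.
    At round t, with probability t**(-rate) a uniformly random arm is
    pulled (forced exploration, ensuring every arm keeps being sampled);
    otherwise the arm maximizing the UCB index is pulled. Returns pull
    counts and empirical mean estimates per arm.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # initialization: pull each arm once
        elif rng.random() < t ** (-rate):
            arm = rng.randrange(k)  # forced-exploration round
        else:
            # standard UCB1 index: empirical mean + confidence radius
            ucb = [sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
                   for i in range(k)]
            arm = max(range(k), key=ucb.__getitem__)
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts, [sums[i] / counts[i] for i in range(k)]
```

The schedule t^(-rate) is the key design lever: a slower decay samples suboptimal arms more often, improving estimation accuracy at the cost of regret, which is exactly the Pareto trade-off the paper analyzes.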
Problem

Research questions and friction points this paper is trying to address.

Multi-armed Bandit Problems
Pareto Optimality
Strategy Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multinomial Logistic Regression Bandits
Pareto Optimality
Exploration Strategy Improvement