Achieving Limited Adaptivity for Multinomial Logistic Bandits

📅 2025-08-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In multinomial logistic bandits, frequent policy updates incur prohibitive computational costs. Method: We propose two low-adaptivity algorithms, B-MNL-CB (batched updates) and RS-MNL (sparse switching), the first to extend distributionally optimal design to the multinomial setting under both stochastic and adversarial contexts. Both integrate multinomial logit modeling, batched feedback, and adaptive switching mechanisms. Contribution/Results: B-MNL-CB achieves $\tilde{O}(\sqrt{T})$ regret with only $\Omega(\log \log T)$ policy updates; RS-MNL attains the same regret bound under adversarial contexts using only $\tilde{O}(\log T)$ switches. Empirical evaluation demonstrates substantial improvements over state-of-the-art online methods.

📝 Abstract
Multinomial Logistic Bandits have recently attracted much attention due to their ability to model problems with multiple outcomes. In this setting, each decision is associated with many possible outcomes, modeled using a multinomial logit function. Several recent works on multinomial logistic bandits have simultaneously achieved optimal regret and computational efficiency. However, motivated by real-world challenges and practicality, there is a need to develop algorithms with limited adaptivity, wherein we are allowed only $M$ policy updates. To address these challenges, we present two algorithms, B-MNL-CB and RS-MNL, that operate in the batched and rarely-switching paradigms, respectively. The batched setting involves choosing the $M$ policy update rounds at the start of the algorithm, while the rarely-switching setting can choose these $M$ policy update rounds in an adaptive fashion. Our first algorithm, B-MNL-CB, extends the notion of distributional optimal designs to the multinomial setting and achieves $\tilde{O}(\sqrt{T})$ regret, assuming the contexts are generated stochastically, when presented with $\Omega(\log \log T)$ update rounds. Our second algorithm, RS-MNL, works with adversarially generated contexts and can achieve $\tilde{O}(\sqrt{T})$ regret with $\tilde{O}(\log T)$ policy updates. Further, we conducted experiments that demonstrate that our algorithms (with a fixed number of policy updates) are extremely competitive with (and often better than) several state-of-the-art baselines (which update their policy every round), showcasing the applicability of our algorithms in various practical scenarios.
Problem

Research questions and friction points this paper is trying to address.

Develop limited adaptivity algorithms for multinomial logistic bandits
Achieve optimal regret with constrained policy updates
Address batched and rarely-switching settings for practical applicability
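To make the setting concrete, the multinomial logit model referenced above maps a context and per-outcome parameters to a probability distribution over $K+1$ outcomes (including a "null" outcome with utility fixed to zero). The sketch below is a standard MNL probability computation, not code from the paper; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def mnl_probabilities(x, theta):
    """Standard multinomial logit outcome probabilities (illustrative sketch).

    x:     context vector, shape (d,)
    theta: one parameter row per non-null outcome, shape (K, d)

    Returns a probability vector over K+1 outcomes; index 0 is the
    null outcome, whose utility is conventionally fixed to 0.
    """
    u = theta @ x                     # utilities of the K non-null outcomes
    z = np.concatenate(([0.0], u))    # prepend the null outcome's utility
    z = z - z.max()                   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()                # softmax over K+1 outcomes
```

A bandit algorithm in this setting would estimate `theta` from batched feedback and pick the decision whose predicted outcome distribution maximizes expected reward.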
Innovation

Methods, ideas, or system contributions that make the work stand out.

Batched policy updates with B-MNL-CB algorithm
Rarely-switching policy updates with RS-MNL algorithm
Optimal regret with limited adaptive policy updates
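In the batched paradigm, the $M$ update rounds are fixed in advance. A common way to obtain $\tilde{O}(\sqrt{T})$ regret from only $\Omega(\log \log T)$ batches in the batched-bandit literature is a doubling-exponent grid $t_i = T^{1 - 2^{-i}}$; the sketch below illustrates that schedule under the assumption that a similar grid applies here, and is not necessarily the paper's exact construction.

```python
import math

def batch_grid(T, M):
    """Illustrative batch-endpoint schedule t_i = T^(1 - 2^-i).

    Produces M batch endpoints that grow doubly exponentially in
    exponent, so roughly log2(log(T)) batches suffice to reach T.
    """
    grid = [min(T, math.ceil(T ** (1.0 - 2.0 ** (-i)))) for i in range(1, M + 1)]
    grid[-1] = T  # the final batch always ends at the horizon
    return grid
```

For example, with `T = 10000` and `M = 4`, the endpoints land near $T^{1/2}$, $T^{3/4}$, $T^{7/8}$, and $T$, so each policy update is computed from all feedback collected up to the previous endpoint.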
Sukruta Prakash Midigeshi
Microsoft Research India
Tanmay Goyal
Microsoft Research India
Gaurav Sinha
Principal Researcher at Microsoft Research
Causal Inference · Reinforcement Learning · Theoretical Computer Science