No One Size Fits All: QueryBandits for Hallucination Mitigation

📅 2026-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes QueryBandits, the first model-agnostic framework that applies contextual bandits to mitigate hallucinations in closed-source large language models (LLMs). Unlike existing approaches that rely on post-processing or parameter editing of open-source models, which makes them impractical for such deployments, QueryBandits dynamically selects optimal query-rewriting strategies (e.g., paraphrasing or expansion) via online learning, requiring only forward passes with no retraining or gradient updates. Leveraging Thompson Sampling, semantic feature extraction, and an empirically calibrated reward function, QueryBandits achieves an 87.5% win rate across 16 question-answering scenarios, significantly outperforming the no-rewrite baseline. It also improves over static zero-shot strategies by 42.6% (Paraphrase) and 60.3% (Expand), demonstrating that no single rewriting strategy is universally effective and that contextual information is critical for dynamic strategy selection.

📝 Abstract
Advanced reasoning capabilities in Large Language Models (LLMs) have led to more frequent hallucinations; yet most mitigation work focuses on open-source models for post-hoc detection and parameter editing. The dearth of studies focusing on hallucinations in closed-source models is especially concerning, as they constitute the vast majority of models in institutional deployments. We introduce QueryBandits, a model-agnostic contextual bandit framework that adaptively learns online to select the optimal query-rewrite strategy by leveraging an empirically validated and calibrated reward function. Across 16 QA scenarios, our top QueryBandit (Thompson Sampling) achieves an 87.5% win rate over a No-Rewrite baseline and outperforms zero-shot static policies (e.g., Paraphrase or Expand) by 42.6% and 60.3%, respectively. Moreover, all contextual bandits outperform vanilla bandits across all datasets, with higher feature variance coinciding with greater variance in arm selection. This substantiates our finding that there is no single rewrite policy optimal for all queries. We also discover that certain static policies incur higher cumulative regret than No-Rewrite, indicating that an inflexible query-rewriting policy can worsen hallucinations. Thus, learning an online policy over semantic features with QueryBandits can shift model behavior purely through forward-pass mechanisms, enabling its use with closed-source models and bypassing the need for retraining or gradient-based adaptation.
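The mechanism the abstract describes, Thompson Sampling over semantic query features with one arm per rewrite strategy, can be sketched as a linear contextual bandit. The snippet below is an illustrative reconstruction, not the paper's implementation: the arm names, feature dimension, Gaussian prior, and noise scale are all assumptions.

```python
import numpy as np

class LinearThompsonBandit:
    """Per-arm Bayesian linear regression with Thompson Sampling.

    Arms correspond to query-rewrite strategies (e.g. No-Rewrite,
    Paraphrase, Expand); the context x is a semantic feature vector
    for the incoming query, and rewards come from a calibrated
    hallucination score. All hyperparameters here are illustrative.
    """

    def __init__(self, arms, dim, noise=0.5):
        self.arms = arms
        self.dim = dim
        self.noise = noise  # assumed reward-noise scale
        # Gaussian prior N(0, I) per arm: precision matrix A, moment vector b
        self.A = {a: np.eye(dim) for a in arms}
        self.b = {a: np.zeros(dim) for a in arms}

    def select(self, x, rng):
        """Sample a reward model per arm; pick the arm best under the sample."""
        best_arm, best_val = None, -np.inf
        for a in self.arms:
            cov = np.linalg.inv(self.A[a])
            mean = cov @ self.b[a]
            theta = rng.multivariate_normal(mean, self.noise**2 * cov)
            val = float(x @ theta)
            if val > best_val:
                best_arm, best_val = a, val
        return best_arm

    def update(self, arm, x, reward):
        """Rank-1 posterior update for the chosen arm only."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

Only a forward pass of the LLM is needed to score each rewritten query, which is what makes this style of policy usable with closed-source models: the bandit's posterior update touches its own small matrices, never the model's weights.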
Problem

Research questions and friction points this paper is trying to address.

hallucination mitigation
closed-source LLMs
query rewriting
adaptive policy
contextual bandits
Innovation

Methods, ideas, or system contributions that make the work stand out.

QueryBandits
hallucination mitigation
contextual bandits
query rewriting
closed-source LLMs
Nicole Cho
JPMorgan AI Research, New York, NY, USA
William Watson
JPMorgan AI Research, New York, NY, USA
Alec Koppel
Research Lead, JP Morgan AI Research
Optimization · Machine Learning · Signal Processing
Sumitra Ganesh
Research Director, J.P. Morgan AI Research
Multi-Agent Systems · Reinforcement Learning
Manuela Veloso
JPMorgan AI Research, New York, NY, USA