QueryBandits for Hallucination Mitigation: Exploiting Semantic Features for No-Regret Rewriting

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently hallucinate, yet existing mitigation approaches rely predominantly on post-hoc filtering. Method: This paper proposes QueryBandits, the first framework to integrate contextual multi-armed bandit algorithms into query rewriting, enabling proactive intervention through the forward pass alone, without fine-tuning. It constructs a reward model over 17 semantic features of the input query and employs Thompson Sampling for online policy optimization, dynamically adapting the rewrite strategy to each query. Empirical analysis reveals that static rewriting can exacerbate hallucination and that no single rewriting strategy is universally optimal. Results: Evaluated across 13 QA benchmarks, QueryBandits achieves an 87.5% win rate over the no-rewrite baseline and outperforms zero-shot static prompting ("paraphrase" and "expand") by 42.6% and 60.3% respectively, while substantially reducing hallucination rates.
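
The summary above maps onto a standard contextual bandit loop. Below is a minimal sketch of that loop under assumed details: one Bayesian linear-regression posterior per rewrite arm, sampled per query in Thompson Sampling fashion, with the paper's 17 semantic features stood in by a generic d-dimensional vector. The class name, hyperparameters, and arm names are hypothetical, not the paper's implementation.

```python
import numpy as np

class LinearTSRewriteBandit:
    """Contextual Thompson Sampling over query-rewrite arms (sketch).

    Each arm (rewrite strategy) holds a Bayesian linear-regression
    posterior over a d-dimensional query-feature vector; the paper
    uses 17 semantic features, which are not reproduced here.
    """

    def __init__(self, arms, d=17, prior_var=1.0, noise_var=0.25):
        self.arms = list(arms)   # e.g. ["none", "paraphrase", "expand", ...]
        self.noise_var = noise_var
        # Per-arm sufficient statistics: precision matrix A and vector b,
        # giving posterior mean A^{-1} b and covariance noise_var * A^{-1}.
        self.A = {a: np.eye(d) / prior_var for a in self.arms}
        self.b = {a: np.zeros(d) for a in self.arms}

    def select(self, x):
        """Sample a weight vector per arm; pick the best arm for context x."""
        best_arm, best_score = None, -np.inf
        for a in self.arms:
            A_inv = np.linalg.inv(self.A[a])
            w = np.random.multivariate_normal(A_inv @ self.b[a],
                                              self.noise_var * A_inv)
            score = w @ x
            if score > best_score:
                best_arm, best_score = a, score
        return best_arm

    def update(self, arm, x, reward):
        """Conjugate Bayesian linear-regression update for the chosen arm."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

In use, `select` takes the feature vector of an incoming query and returns a rewrite strategy; `update` feeds back a reward, e.g. 1 minus a hallucination score for the answer the rewritten query produced.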

📝 Abstract
Advanced reasoning capabilities in Large Language Models (LLMs) have been accompanied by a higher prevalence of hallucination, yet most mitigation work focuses on after-the-fact filtering rather than shaping the queries that trigger it. We introduce QueryBandits, a bandit framework that designs rewrite strategies to maximize a reward model that encapsulates hallucination propensity based upon the sensitivities of 17 linguistic features of the input query, and therefore proactively steers LLMs away from generating hallucinations. Across 13 diverse QA benchmarks and 1,050 lexically perturbed queries per dataset, our top contextual QueryBandit (Thompson Sampling) achieves an 87.5% win rate over a no-rewrite baseline and also outperforms zero-shot static prompting ("paraphrase" or "expand") by 42.6% and 60.3% respectively. We thereby empirically substantiate the effectiveness of QueryBandits in mitigating hallucination via an intervention that takes the form of a query rewrite. Interestingly, certain static prompting strategies, which constitute a considerable portion of the current query-rewriting literature, incur higher cumulative regret than the no-rewrite baseline, signifying that static rewrites can worsen hallucination. Moreover, the converged per-arm regression feature weight vectors substantiate that no single rewrite strategy is optimal for all queries. In this context, guided rewriting that exploits semantic features with QueryBandits can induce significant shifts in output behavior through forward-pass mechanisms alone, bypassing the need for retraining or gradient-based adaptation.
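
The abstract traces the "no universally optimal rewrite" finding to the converged per-arm feature weights: different arms load on different query properties. As a purely hypothetical stand-in for the paper's 17 linguistic features (which are not listed on this page), a feature extractor could look like the following; these four toy features are assumptions for illustration only.

```python
import re

def query_features(q: str) -> list[float]:
    """Toy stand-in for the paper's 17-feature extractor.

    These four illustrative features are NOT the paper's actual
    feature set; they only show the kind of per-query signal a
    contextual bandit can condition on.
    """
    tokens = q.split()
    n = max(len(tokens), 1)
    return [
        float(len(tokens)),                      # query length
        float(q.strip().endswith("?")),          # explicit question form
        float(bool(re.search(r"\b(who|what|when|where|why|how)\b", q.lower()))),
        sum(t[0].isupper() for t in tokens) / n, # crude proper-noun density
    ]
```

Feeding such vectors into the per-arm linear models above lets the bandit learn, for instance, that entity-heavy queries favor one rewrite while long interrogative queries favor another.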
Problem

Research questions and friction points this paper is trying to address.

Mitigating LLM hallucinations via query rewriting strategies
Proactively shaping queries using semantic linguistic features
Optimizing rewrite strategies with a bandit framework to reduce hallucinations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bandit framework learns rewrite strategies that maximize a hallucination-aware reward
Uses semantic query features to proactively steer LLMs away from hallucinations
Contextual bandit achieves a high win rate over baseline methods (see the evaluation sketch after this list)
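
Tying the sketches above together, a hypothetical evaluation loop illustrates how the win-rate and cumulative-regret comparisons could be computed; `reward_fn` and `oracle_reward` are assumed callables (e.g. 1 minus a hallucination score, and the best arm's reward in hindsight), not the paper's protocol. Note the bandit must be instantiated with `d` equal to the feature dimension (d=4 for the toy extractor above).

```python
import numpy as np

def run_stream(bandit, queries, reward_fn, oracle_reward):
    """Run the bandit over a query stream, tracking cumulative regret.

    Uses the LinearTSRewriteBandit and query_features sketches above;
    all of this is illustrative, not the paper's evaluation code.
    """
    cum_regret, total = [], 0.0
    for q in queries:
        x = np.asarray(query_features(q))  # 17-dim in the paper; toy features here
        arm = bandit.select(x)
        r = reward_fn(q, arm)              # observed reward for the chosen rewrite
        bandit.update(arm, x, r)
        total += oracle_reward(q) - r      # per-step regret vs. best arm in hindsight
        cum_regret.append(total)
    return cum_regret
```

Running the same loop with `bandit.select` replaced by a constant arm reproduces the static-rewrite comparison; per the abstract, some static rewrites accrue more cumulative regret than never rewriting at all.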
👥 Authors
Nicole Cho
JP Morgan AI Research, New York, NY
William Watson
JP Morgan AI Research, New York, NY
Alec Koppel
Research Lead, JP Morgan AI Research
Optimization · Machine Learning · Signal Processing
Sumitra Ganesh
Research Director, JP Morgan AI Research
Multi-Agent Systems · Reinforcement Learning
Manuela Veloso
JP Morgan AI Research, New York, NY