Contextual bandits with entropy-based human feedback

📅 2025-02-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In contextual bandits, human preference feedback suffers from inefficiency, instability, and high sensitivity to noise. To address this, we propose an entropy-driven dynamic feedback triggering mechanism: using the model’s predictive entropy as an adaptive threshold, human annotations are solicited only under high uncertainty—thereby balancing exploration and exploitation. This is the first work to employ predictive entropy as the core criterion for feedback solicitation, enabling model-agnostic and robust control over feedback timing. The mechanism inherently tolerates low-quality feedback: on multiple benchmark tasks, it achieves full-feedback performance using only ~30% of human annotations; even when feedback quality degrades by 20%, it retains over 92% of the relative cumulative reward. Our implementation is publicly available.

Technology Category

Application Category

📝 Abstract
In recent years, preference-based human feedback mechanisms have become essential for enhancing model performance across diverse applications, including conversational AI systems such as ChatGPT. However, existing approaches often neglect critical aspects, such as model uncertainty and the variability in feedback quality. To address these challenges, we introduce an entropy-based human feedback framework for contextual bandits, which dynamically balances exploration and exploitation by soliciting expert feedback only when model entropy exceeds a predefined threshold. Our method is model-agnostic and can be seamlessly integrated with any contextual bandit agent employing stochastic policies. Through comprehensive experiments, we show that our approach achieves significant performance improvements while requiring minimal human feedback, even under conditions of suboptimal feedback quality. This work not only presents a novel strategy for feedback solicitation but also highlights the robustness and efficacy of incorporating human guidance into machine learning systems. Our code is publicly available: https://github.com/BorealisAI/CBHF
Problem

Research questions and friction points this paper is trying to address.

Enhancing model performance with feedback
Balancing exploration and exploitation dynamically
Reducing human feedback while improving accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Entropy-based human feedback
Dynamic exploration-exploitation balance
Model-agnostic integration capability
🔎 Similar Papers
No similar papers found.