Learning to Trust: Bayesian Adaptation to Varying Suggester Reliability in Sequential Decision Making

πŸ“… 2025-11-15
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Autonomous agents operating in partially observable environments face challenges in dynamically adapting to unreliable external advisors. Method: This paper proposes a Bayesian adaptive sequential decision-making framework that models advisor reliability as a latent variable integrated into the agent's belief state, designs a learnable "query" action, and jointly optimizes trust updating and query timing within a reinforcement learning and POMDP framework. Real-time advisor-type inference enables online estimation of, and adaptation to, advice quality. Contribution/Results: To our knowledge, this is the first work to unify dynamic credibility learning, active querying decisions, and sequential planning. Experiments demonstrate robust performance under abrupt or gradual changes in advisor reliability, rapid convergence, significantly fewer redundant queries, and improved human-agent collaborative decision-making efficiency.
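The core Bayesian mechanism the summary describes — maintaining a belief over discrete advisor types and updating it from each suggestion — can be illustrated with a minimal sketch. This is not the paper's implementation; the two reliability types, the assumption that a type is simply the probability of suggesting the agent's estimated optimal action, and all numbers below are hypothetical choices for illustration.

```python
# Hypothetical sketch: Bayesian belief update over discrete suggester
# reliability types. Here a "type" is the probability p that the suggester
# recommends the agent-estimated optimal action; otherwise it is assumed
# to pick uniformly among the remaining actions.

def update_type_belief(belief, types, suggestion, optimal_action, n_actions):
    """Return the posterior over suggester types after one observed suggestion."""
    posterior = []
    for b, p in zip(belief, types):
        if suggestion == optimal_action:
            likelihood = p
        else:
            likelihood = (1.0 - p) / (n_actions - 1)
        posterior.append(b * likelihood)
    z = sum(posterior)  # normalizing constant
    return [x / z for x in posterior]

# Start uniform over a "reliable" (p=0.9) and an "unreliable" (p=0.3) type.
belief = [0.5, 0.5]
types = [0.9, 0.3]
for _ in range(3):  # suggester repeatedly agrees with the optimal action
    belief = update_type_belief(belief, types, suggestion=0,
                                optimal_action=0, n_actions=4)
print(belief)  # mass shifts toward the reliable type
```

After three agreeing suggestions the posterior odds favor the reliable type by a factor of (0.9/0.3)³ = 27, showing how trust can converge quickly, as the summary reports.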

πŸ“ Abstract
Autonomous agents operating in sequential decision-making tasks under uncertainty can benefit from external action suggestions, which provide valuable guidance but inherently vary in reliability. Existing methods for incorporating such advice typically assume static and known suggester quality parameters, limiting practical deployment. We introduce a framework that dynamically learns and adapts to varying suggester reliability in partially observable environments. First, we integrate suggester quality directly into the agent's belief representation, enabling agents to infer and adjust their reliance on suggestions through Bayesian inference over suggester types. Second, we introduce an explicit "ask" action allowing agents to strategically request suggestions at critical moments, balancing informational gains against acquisition costs. Experimental evaluation demonstrates robust performance across varying suggester qualities, adaptation to changing reliability, and strategic management of suggestion requests. This work provides a foundation for adaptive human-agent collaboration by addressing suggestion uncertainty in uncertain environments.
Problem

Research questions and friction points this paper is trying to address.

Dynamically adapting to varying suggester reliability in uncertain environments
Integrating Bayesian inference to adjust reliance on external suggestions
Balancing suggestion acquisition costs against informational benefits strategically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian inference dynamically learns suggester reliability
Explicit ask action strategically requests suggestions
Integrates suggester quality into belief representation
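The explicit "ask" action above amounts to a value-of-information test: request a suggestion only when the expected reduction in uncertainty is worth the acquisition cost. A minimal sketch of that tradeoff, with entropy as the uncertainty measure; the thresholding rule, the `value_per_nat` conversion factor, and all numbers are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of the query-timing tradeoff: ask for a suggestion
# only when the expected value of the information exceeds the query cost.
import math

def entropy(belief):
    """Shannon entropy (in nats) of a discrete belief distribution."""
    return -sum(p * math.log(p) for p in belief if p > 0)

def should_ask(state_belief, expected_entropy_after, query_cost, value_per_nat):
    """Ask iff the expected uncertainty reduction is worth the cost."""
    gain = entropy(state_belief) - expected_entropy_after
    return gain * value_per_nat > query_cost

# Near-uniform belief: high uncertainty, so querying pays off.
print(should_ask([0.25, 0.25, 0.25, 0.25], 0.5,
                 query_cost=0.3, value_per_nat=1.0))   # True
# Confident belief: little left to learn, so the query is wasted cost.
print(should_ask([0.97, 0.01, 0.01, 0.01], 0.05,
                 query_cost=0.3, value_per_nat=1.0))   # False
```

Gating queries this way is what reduces redundant suggestion requests while preserving performance, per the reported results.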
πŸ”Ž Similar Papers
2024-01-27 · Conference on Fairness, Accountability and Transparency · Citations: 25
2024-07-22 · arXiv.org · Citations: 1