Learning When Not to Learn: Risk-Sensitive Abstention in Bandits with Unbounded Rewards

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address irreversible errors in high-risk AI applications, this paper proposes a contextual bandit model with an explicit abstention (i.e., “opt-out”) action, enabling the agent to avoid catastrophic decisions without relying on external supervision. Methodologically, the authors formalize a “cautious learning” framework under unbounded rewards, introduce an adaptive trust-region mechanism to determine *when not to learn*, and design a cautious exploration algorithm grounded in Lipschitz continuity assumptions. The proposed strategy is proven to achieve sublinear regret, balancing safety guarantees with learning efficiency. Key contributions are: (1) modeling abstention as an explicit safety action within the decision space; (2) establishing a provably safe learning paradigm for contextual decision-making without a teacher or oracle; and (3) providing the first theoretical guarantee for cautious sequential decision-making under unbounded rewards, ensuring both safety and asymptotic optimality.

📝 Abstract
In high-stakes AI applications, even a single action can cause irreparable damage. However, nearly all sequential decision-making theory assumes that all errors are recoverable (e.g., by bounding rewards). Standard bandit algorithms that explore aggressively may cause irreparable damage when this assumption fails. Some prior work avoids irreparable errors by asking for help from a mentor, but a mentor may not always be available. In this work, we formalize a model of learning with unbounded rewards without a mentor as a two-action contextual bandit with an abstain option: at each round the agent observes an input and chooses either to abstain (always 0 reward) or to commit (execute a preexisting task policy). Committing yields rewards that are upper-bounded but can be arbitrarily negative, and the commit reward is assumed Lipschitz in the input. We propose a caution-based algorithm that learns when not to learn: it chooses a trusted region and commits only where the available evidence does not already certify harm. Under these conditions and i.i.d. inputs, we establish sublinear regret guarantees, theoretically demonstrating the effectiveness of cautious exploration for deploying learning agents safely in high-stakes environments.
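The abstain/commit rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the environment `commit_reward`, the known Lipschitz constant `LIP`, noiseless reward observations, and the exact certification rule are all simplifying assumptions. The key idea it shows is that, under a Lipschitz assumption, past observations induce an upper envelope on the commit reward, and the agent abstains exactly where that envelope already certifies a worse-than-zero outcome.

```python
import numpy as np

rng = np.random.default_rng(0)

LIP = 8.0   # assumed known Lipschitz constant of the commit reward
T = 500     # horizon

def commit_reward(x):
    # Hypothetical environment: reward upper-bounded (<= 1.0),
    # 8-Lipschitz in the input, negative away from x = 0.7.
    return 1.0 - 8.0 * abs(x - 0.7)

history = []  # (input, observed commit reward) pairs

def certified_harmful(x):
    # Evidence certifies harm at x if every reward function consistent
    # with the Lipschitz assumption and past observations pays less at x
    # than abstaining (reward 0).
    if not history:
        return False  # no evidence yet, so nothing is certified
    upper = min(r + LIP * abs(x - xi) for xi, r in history)
    return upper < 0.0

total_reward = 0.0
for _ in range(T):
    x = rng.uniform(0.0, 1.0)   # i.i.d. input
    if certified_harmful(x):
        continue                # abstain: reward 0, no new observation
    r = commit_reward(x)        # commit: execute the task policy
    history.append((x, r))
    total_reward += r
```

Note the asymmetry this sketch encodes: a single bad observation can certify harm over a whole neighborhood (whose radius scales with how negative the reward was, divided by `LIP`), while regions that might still be profitable remain open to exploration. The paper's actual trusted-region construction and its sublinear-regret analysis are more involved than this one-sided cutoff.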
Problem

Research questions and friction points this paper is trying to address.

Addresses risk-sensitive abstention in bandits with unbounded negative rewards
Proposes cautious algorithm committing only when evidence certifies safety
Enables safe deployment of learning agents in high-stakes environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Abstention option in contextual bandit framework
Caution-based algorithm with trusted region
Sublinear regret with i.i.d. inputs