🤖 AI Summary
To address irreversible errors in high-risk AI applications, this paper proposes a contextual bandit model with an explicit abstention (i.e., "opt-out") action, enabling the agent to avoid catastrophic decisions without relying on external supervision. Methodologically, the paper formalizes a "cautious learning" framework under unbounded rewards, introduces an adaptive trust-region mechanism that dynamically determines *when not to learn*, and designs a cautious exploration algorithm grounded in a Lipschitz continuity assumption. The proposed strategy provably achieves sublinear regret, balancing safety guarantees with learning efficiency. Key contributions are: (1) modeling abstention as an explicit safety action within the decision space; (2) establishing a provably safe learning paradigm for contextual decision-making without a teacher or oracle; and (3) providing the first theoretical guarantee for cautious sequential decision-making under unbounded rewards, ensuring both safety and asymptotic optimality.
📝 Abstract
In high-stakes AI applications, even a single action can cause irreparable damage. However, nearly all sequential decision-making theory assumes that all errors are recoverable (e.g., by bounding rewards), and standard bandit algorithms that explore aggressively can cause catastrophic harm when this assumption fails. Some prior work avoids irreparable errors by asking a mentor for help, but a mentor may not always be available. In this work, we formalize mentor-free learning with unbounded rewards as a two-action contextual bandit with an abstain option: at each round the agent observes an input and chooses either to abstain (always receiving reward 0) or to commit (executing a preexisting task policy). Committing yields rewards that are upper-bounded but can be arbitrarily negative, and the commit reward is assumed Lipschitz in the input. We propose a caution-based algorithm that learns when not to learn: it maintains a trusted region and commits only where the available evidence does not already certify harm. Under these assumptions and i.i.d. inputs, we establish sublinear regret guarantees, theoretically demonstrating that cautious exploration enables safe deployment of learning agents in high-stakes environments.
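To make the caution rule concrete, here is a rough illustrative sketch (not the paper's actual algorithm) of how a Lipschitz assumption lets past observations certify harm: each observed commit reward `r_i` at context `x_i` yields an upper bound `r_i + L·|x − x_i|` on the unknown reward at `x`, and the agent abstains whenever some bound is already negative. The one-dimensional context space, the function names, and the toy reward function are all assumptions for illustration.

```python
import random

def cautious_bandit(f, lip, horizon, seed=0):
    """Sketch of a Lipschitz-caution rule: commit unless past
    observations already certify that the commit reward is negative."""
    rng = random.Random(seed)
    history = []  # (context, observed commit reward) pairs
    total = 0.0
    for _ in range(horizon):
        x = rng.random()  # i.i.d. context drawn from [0, 1]
        # Tightest Lipschitz upper bound on f(x) implied by the evidence;
        # with no evidence yet, the bound is vacuous (infinity).
        ucb = min((r + lip * abs(x - xi) for xi, r in history),
                  default=float("inf"))
        if ucb >= 0:
            # Evidence does not certify harm: commit and observe a reward.
            r = f(x)
            history.append((x, r))
            total += r
        # Otherwise abstain: reward 0, and no new observation is gained.
    return total, history

# Toy commit-reward function: safe on [0, 0.5], harmful beyond it.
total, hist = cautious_bandit(lambda x: 0.5 - x, lip=1.0, horizon=1000)
```

Note that the rule is deliberately one-sided: it abstains only where harm is *certified*, so early rounds may still commit in unexplored (and possibly harmful) regions; the paper's regret analysis is what controls the cost of those exploratory commits.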