When a Reinforcement Learning Agent Encounters Unknown Unknowns

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reinforcement learning agents suffer from decision failure when encountering "unknown unknowns": states entirely outside their modeled state space. To address this, we propose the Episodic Markov Decision Process with Growing Awareness (EMDP-GA), which dynamically expands an agent's awareness domain to explicitly represent its current cognitive boundary. Our key contribution is the Non-Informative Value Expansion (NIVE) mechanism: newly discovered states are initialized with the mean Q- or V-value over the currently known domain, respecting the agent's complete lack of knowledge about the new state. Upper-Confidence-Bound Momentum Q-learning is adapted to the growing awareness domain, yielding an online training algorithm of controllable complexity. We prove that the resulting regret is asymptotically consistent with the state of the art even in extremely uncertain environments, and that the time and space complexity remain comparable with the SOTA, so unknown unknowns are discovered properly, quickly, and at affordable cost.

📝 Abstract
An AI agent might surprisingly find she has reached an unknown state which she has never been aware of -- an unknown unknown. We mathematically ground this scenario in reinforcement learning: an agent, after taking an action calculated from value functions $Q$ and $V$ defined on the aware domain, reaches a state out of the domain. To enable the agent to handle this scenario, we propose an episodic Markov decision process with growing awareness (EMDP-GA) model, taking a new noninformative value expansion (NIVE) approach to expand value functions to newly aware areas: when an agent arrives at an unknown unknown, the value functions $Q$ and $V$ on that state are initialised with noninformative beliefs -- the averaged values on the aware domain. This design respects the complete absence of knowledge in the newly discovered state. Upper confidence bound momentum Q-learning is then adapted to the growing awareness for training the EMDP-GA model. We prove that (1) the regret of our approach is asymptotically consistent with the state of the art (SOTA) without exposure to unknown unknowns in an extremely uncertain environment, and (2) our computational complexity and space complexity are comparable with the SOTA -- these collectively suggest that though an unknown unknown is surprising, it will be asymptotically properly discovered with decent speed and an affordable cost.
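The NIVE idea described in the abstract can be sketched in a few lines: Q-values exist only on the aware domain, and a newly discovered state is initialised with the mean Q-value over that domain. This is an illustrative sketch only; the class and method names are hypothetical and the paper's actual model (episodic, with both $Q$ and $V$) is richer.

```python
import numpy as np

class GrowingAwarenessAgent:
    """Minimal sketch of noninformative value expansion (NIVE).

    Q-values are defined only on the agent's aware domain. When an
    unknown unknown is reached, the new state's Q-values are set to
    the mean Q-value over the currently aware domain, a noninformative
    belief reflecting zero knowledge about the new state.
    """

    def __init__(self, n_actions, initial_states):
        self.n_actions = n_actions
        # Q-table indexed by state; keys form the aware domain.
        self.Q = {s: np.zeros(n_actions) for s in initial_states}

    def expand_awareness(self, new_state):
        """NIVE: initialise a newly discovered state noninformatively."""
        if new_state in self.Q:
            return
        # Average over every (state, action) value in the aware domain.
        mean_q = np.mean([q for q in self.Q.values()])
        self.Q[new_state] = np.full(self.n_actions, mean_q)

    def step(self, state):
        if state not in self.Q:  # an unknown unknown
            self.expand_awareness(state)
        return int(np.argmax(self.Q[state]))
```

For example, with two known states holding values [1, 3] and [0, 0], a newly encountered state is initialised to the overall mean of 1.0 for every action.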
Problem

Research questions and friction points this paper is trying to address.

Handling unknown states in reinforcement learning agents
Expanding value functions to newly aware areas
Ensuring asymptotic consistency with SOTA performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Episodic Markov decision process with growing awareness
Noninformative value expansion for unknown states
Upper confidence bound momentum Q-learning adaptation
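The third innovation adapts upper confidence bound Q-learning to the growing domain. The following sketch shows only the generic shape of a UCB-style optimistic Q-update; the paper's UCB momentum Q-learning uses a different, bias-corrected bonus and a momentum term, and the function below is a simplified illustration, not the paper's algorithm.

```python
import math

def ucb_q_update(Q, N, state, action, reward, next_state,
                 gamma=0.99, c=1.0):
    """One generic UCB-style Q-learning update (illustrative only).

    Q: dict state -> list of action values; N: dict state -> visit
    counts per action. The exploration bonus shrinks as (state, action)
    is visited more often, encouraging optimism for rarely tried pairs.
    """
    N[state][action] += 1
    n = N[state][action]
    alpha = 1.0 / n                               # learning-rate schedule
    bonus = c * math.sqrt(math.log(n + 1) / n)    # optimism bonus
    target = reward + gamma * max(Q[next_state]) + bonus
    Q[state][action] += alpha * (target - Q[state][action])
    return Q[state][action]
```

On the growing aware domain, a state initialised by NIVE would simply enter `Q` and `N` before its first update; the update rule itself is unchanged.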