Disentangling Recognition and Decision Regrets in Image-Based Reinforcement Learning

📅 2024-09-19
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
In image-based reinforcement learning, policies are typically decomposed into a feature-extraction (recognition) stage and a decision-making stage. Spurious correlations between extracted features and training performance make such policies vulnerable to observational overfitting, and the two-stage structure obscures whether errors stem from poor feature extraction or poor decision-making. This work formally introduces *recognition regret*, which quantifies the value lost when features lack the discriminative power needed for optimal decisions (under-specific representations), and *decision regret*, which measures the value lost when the policy relies on features that are not needed for optimal decision-making (over-specific representations). Grounded in information theory and RL theory, the resulting framework decomposes regret into these two components. Experiments on maze navigation and the Atari game Pong validate the decoupled quantification of both regrets and reveal their distinct impacts on generalization performance. The framework provides a theoretical tool and an empirical methodology for diagnosing and mitigating observational overfitting in image-based RL.

📝 Abstract
In image-based reinforcement learning (RL), policies usually operate in two steps: first extracting lower-dimensional features from raw images (the "recognition" step), and then taking actions based on the extracted features (the "decision" step). Extracting features that are spuriously correlated with performance or irrelevant for decision-making can lead to poor generalization performance, known as observational overfitting in image-based RL. In such cases, it can be hard to quantify how much of the error can be attributed to poor feature extraction vs. poor decision-making. To disentangle the two sources of error, we introduce the notions of recognition regret and decision regret. Using these notions, we characterize and disambiguate the two distinct causes behind observational overfitting: over-specific representations, which include features that are not needed for optimal decision-making (leading to high decision regret), vs. under-specific representations, which only include a limited set of features that were spuriously correlated with performance during training (leading to high recognition regret). Finally, we provide illustrative examples of observational overfitting due to both over-specific and under-specific representations in maze environments and the Atari game Pong.
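The regret decomposition described in the abstract can be illustrated with a toy sketch. The formulas below are an assumption for illustration only (the page does not give the paper's formal definitions): recognition regret is taken as the value lost because the feature map `phi` discards task-relevant information, and decision regret as the additional value lost by the actual policy given those features. The contextual-bandit setup, reward table, and variable names are all hypothetical.

```python
# Hedged sketch of a regret decomposition, NOT the paper's exact definitions:
#   recognition regret = V*  - V*_phi   (value lost by the feature map phi)
#   decision regret    = V*_phi - V_pi  (extra value lost by the policy)
import itertools

# Toy contextual bandit: 4 states, 2 actions, known reward table (hypothetical).
rewards = {
    (0, 0): 1.0, (0, 1): 0.0,
    (1, 0): 0.0, (1, 1): 1.0,
    (2, 0): 1.0, (2, 1): 0.0,
    (3, 0): 0.0, (3, 1): 1.0,
}
states = [0, 1, 2, 3]

# An under-specific feature map: it collapses states 0 and 1,
# losing information needed for optimal decision-making.
phi = {0: "a", 1: "a", 2: "b", 3: "c"}

def value(policy):
    """Expected reward of a policy under a uniform state distribution."""
    return sum(rewards[(s, policy(s))] for s in states) / len(states)

# V*: optimal value with full observations.
v_star = value(lambda s: max((0, 1), key=lambda a: rewards[(s, a)]))

# V*_phi: best value achievable by any policy that only sees phi(s),
# found by brute force over all feature-to-action tables.
def best_on_features():
    feats = sorted(set(phi.values()))
    best = float("-inf")
    for assignment in itertools.product((0, 1), repeat=len(feats)):
        table = dict(zip(feats, assignment))
        best = max(best, value(lambda s: table[phi[s]]))
    return best

v_phi_star = best_on_features()

# V_pi: some actual (suboptimal) policy trained on the features;
# here simply "always take action 0".
v_pi = value(lambda s: 0)

recognition_regret = v_star - v_phi_star  # loss attributable to phi
decision_regret = v_phi_star - v_pi       # loss attributable to the policy

print(recognition_regret, decision_regret)  # → 0.25 0.25
```

Because `phi` merges two states with opposite optimal actions, even the best feature-based policy pays 0.25 in recognition regret; the fixed "always action 0" policy then pays a further 0.25 in decision regret on top of that.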
Problem

Research questions and friction points this paper is trying to address.

Disentangling recognition and decision regrets in RL
Addressing observational overfitting in image-based RL
Quantifying errors from feature extraction vs decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces recognition and decision regrets
Disambiguates over-specific and under-specific representations
Illustrates observational overfitting in maze and Pong