Pushdown Reward Machines for Reinforcement Learning

📅 2025-08-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of conventional reward machines (RMs): they can only model regular languages, and therefore cannot capture temporally extended behaviours beyond the regular class. To overcome this, we propose pushdown reward machines (pdRMs), an extension of reward machines built on deterministic pushdown automata, enabling recognition of and reward assignment for behaviours specified by deterministic context-free languages. Methodologically, pdRMs employ a stack to encode history-dependent constraints and support two policy observation modes, full-stack and bounded top-k stack inspection, together with a procedure for checking when the two modes achieve the same optimal expected reward. We establish theoretical results on the expressive power of pdRMs and the space complexity of the associated learning problems. Experiments demonstrate that RL agents equipped with pdRMs can learn tasks representable in deterministic context-free languages, which are beyond the expressive power of standard RMs.
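To make the idea concrete, here is a minimal sketch of a pdRM as a deterministic pushdown automaton that emits rewards on transitions. This is not the paper's formalism or code: the state names, stack alphabet, and the example task (rewarding balanced a^n b^n event sequences, a classic deterministic context-free language no standard RM can recognize) are all hypothetical choices for illustration.

```python
# Illustrative sketch of a pushdown reward machine (pdRM).
# Hypothetical example, not from the paper: rewards completing
# a balanced a^n b^n event sequence, a deterministic context-free
# behaviour that no finite-state reward machine can recognize.

class PushdownRewardMachine:
    def __init__(self):
        self.state = "q0"
        self.stack = ["Z"]  # 'Z' marks the bottom of the stack

    def step(self, event):
        """Consume one event; return the reward for this transition."""
        if self.state == "q0" and event == "a":
            self.stack.append("A")        # push one 'A' per 'a'
            return 0.0
        if self.state in ("q0", "q1") and event == "b" and self.stack[-1] == "A":
            self.state = "q1"
            self.stack.pop()              # pop one 'A' per 'b'
            if self.stack[-1] == "Z":     # every 'a' matched: sequence done
                self.state = "q0"
                return 1.0
            return 0.0
        self.state = "qerr"               # any other event: sink state
        return 0.0

    def top_k(self, k):
        """Bounded observation mode: only the top-k stack symbols."""
        return tuple(self.stack[-k:])

pdrm = PushdownRewardMachine()
rewards = [pdrm.step(e) for e in "aaabbb"]
print(rewards)  # reward 1.0 only on the final, matching 'b'
```

The `top_k` method mirrors the paper's bounded-top-stack inspection: a policy that observes only the top k symbols has a finite observation space, at the cost of possibly losing information held deeper in the stack.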

📝 Abstract
Reward machines (RMs) are automata structures that encode (non-Markovian) reward functions for reinforcement learning (RL). RMs can reward any behaviour representable in regular languages and, when paired with RL algorithms that exploit RM structure, have been shown to significantly improve sample efficiency in many domains. In this work, we present pushdown reward machines (pdRMs), an extension of reward machines based on deterministic pushdown automata. pdRMs can recognize and reward temporally extended behaviours representable in deterministic context-free languages, making them more expressive than reward machines. We introduce two variants of pdRM-based policies, one which has access to the entire stack of the pdRM, and one which can only access the top $k$ symbols (for a given constant $k$) of the stack. We propose a procedure to check when the two kinds of policies (for a given environment, pdRM, and constant $k$) achieve the same optimal expected reward. We then provide theoretical results establishing the expressive power of pdRMs, and space complexity results about the proposed learning problems. Finally, we provide experimental results showing how agents can be trained to perform tasks representable in deterministic context-free languages using pdRMs.
Problem

Research questions and friction points this paper is trying to address.

Extending reward machines to handle context-free languages
Introducing stack-based policies with limited or full access
Verifying equivalence between different policy variants
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pushdown reward machines (pdRMs) extend reward machines with a stack
Built on deterministic pushdown automata
Recognize and reward behaviours in deterministic context-free languages
🔎 Similar Papers
No similar papers found.