Maximum-Entropy Exploration with Future State-Action Visitation Measures

📅 2026-03-19
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work proposes an intrinsic reward based on the entropy of the discounted distribution of state-action features visited during future time steps, encouraging more thorough exploration within a single trajectory in reinforcement learning. The method formulates a maximum-entropy exploration objective and shows that this visitation distribution is the fixed point of a contraction operator, which enables efficient off-policy estimation. Theoretical analysis establishes that the expected sum of intrinsic rewards lower-bounds the entropy of the discounted feature visitation distribution of trajectories starting from the initial states. Empirically, the approach improves feature coverage within individual trajectories, accelerates convergence for pure-exploration agents, and achieves control performance comparable to standard baselines on the considered benchmarks.
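
In standard notation, the objects described above look roughly as follows (a hedged sketch using a generic feature map φ and policy π; the paper's exact definitions and constants may differ):

```latex
% Hedged sketch; the paper's exact definitions and constants may differ.
% Discounted future state-action feature visitation distribution from (s, a):
d^{\pi}_{(s,a)}(f) = (1-\gamma) \sum_{t=0}^{\infty} \gamma^{t}
    \Pr\big(\phi(S_t, A_t) = f \mid S_0 = s,\ A_0 = a\big)

% Intrinsic reward proportional to its entropy:
r_{\mathrm{int}}(s, a) \propto \mathcal{H}\big(d^{\pi}_{(s,a)}\big)

% Lower-bound relationship (schematic): the expected discounted sum of
% intrinsic rewards lower-bounds the entropy of the feature visitation
% distribution d^{\pi}_{0} of trajectories from the initial states:
\mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, r_{\mathrm{int}}(S_t, A_t)\Big]
    \le c \cdot \mathcal{H}\big(d^{\pi}_{0}\big)
```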

📝 Abstract
Maximum entropy reinforcement learning motivates agents to explore states and actions to maximize the entropy of some distribution, typically by providing additional intrinsic rewards proportional to that entropy function. In this paper, we study intrinsic rewards proportional to the entropy of the discounted distribution of state-action features visited during future time steps. This approach is motivated by two results. First, we show that the expected sum of these intrinsic rewards is a lower bound on the entropy of the discounted distribution of state-action features visited in trajectories starting from the initial states, which we relate to an alternative maximum entropy objective. Second, we show that the distribution used in the intrinsic reward definition is the fixed point of a contraction operator and can therefore be estimated off-policy. Experiments highlight that the new objective leads to improved visitation of features within individual trajectories, in exchange for slightly reduced visitation of features in expectation over different trajectories, as suggested by the lower bound. It also leads to improved convergence speed for learning exploration-only agents. Control performance remains similar across most methods on the considered benchmarks.
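
Because the future visitation distribution is the fixed point of a γ-contraction (a Bellman-style recursion of the form d = (1-γ)·δ_{φ(s,a)} + γ·E[d_{(s',a')}]), it can be estimated with TD-style updates from logged transitions. Below is a minimal tabular sketch of that idea in Python; it is not the authors' implementation, and the feature map `phi`, the table sizes, and the learning rate are illustrative placeholders.

```python
import numpy as np

# Minimal sketch (not the authors' code): tabular TD estimation of the
# discounted future state-action feature visitation distribution
# d_{(s,a)}, learned as the fixed point of the contraction
#   d_{(s,a)} = (1 - gamma) * delta_{phi(s,a)} + gamma * E[d_{(s',a')}].
# Sizes, feature map, and learning rate are illustrative placeholders.

n_states, n_actions, n_features = 10, 4, 8
gamma, alpha = 0.95, 0.1

rng = np.random.default_rng(0)
phi = rng.integers(n_features, size=(n_states, n_actions))  # feature index of each (s, a)

# D[s, a] estimates the distribution d_{(s,a)} over feature bins.
D = np.full((n_states, n_actions, n_features), 1.0 / n_features)

def td_update(s, a, s_next, a_next):
    """One empirical application of the contraction operator:
    target = (1 - gamma) * delta_{phi(s, a)} + gamma * D[s', a']."""
    target = np.zeros(n_features)
    target[phi[s, a]] = 1.0 - gamma
    target += gamma * D[s_next, a_next]
    D[s, a] += alpha * (target - D[s, a])

def intrinsic_reward(s, a):
    """Entropy of the estimated future feature visitation from (s, a)."""
    p = np.clip(D[s, a], 1e-12, None)
    p /= p.sum()
    return -np.sum(p * np.log(p))

# Example with one synthetic transition.  For off-policy estimation,
# (s, a, s') may come from any behavior data, while a' is drawn from
# the target policy; the uniform draw here is a stand-in for pi(.|s').
s, a, s_next = 0, 1, 3
a_next = rng.integers(n_actions)
td_update(s, a, s_next, a_next)
print(intrinsic_reward(s, a))
```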
Problem

Research questions and friction points this paper is trying to address.

maximum entropy reinforcement learning
intrinsic reward
state-action visitation
exploration
feature coverage
Innovation

Methods, ideas, or system contributions that make the work stand out.

maximum entropy reinforcement learning
future state-action visitation
off-policy estimation
intrinsic reward
contraction operator