Expediting Reinforcement Learning by Incorporating Knowledge About Temporal Causality in the Environment

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing probabilistic reward machines (PRMs) struggle to incorporate high-level causal knowledge in sparse-reward reinforcement learning tasks that require complex event sequences, which limits both the interpretability and the cross-domain transferability of reward design. Method: the paper introduces Temporal Logic Causal Graphs (TLCGs) into reward modeling, yielding an interpretable and editable structured reward mechanism. The proposed TLCG-PRM framework semantically embeds the causal graph structure into PRMs and jointly optimizes them with standard RL algorithms. Contribution/Results: the method comes with a theoretical guarantee of convergence to the optimal policy. Empirical evaluation on multiple complex sequential decision-making tasks shows a 2.3× average speed-up in learning convergence. Moreover, the causal structure enables effective reward transfer across environments, significantly improving generalization without retraining reward models from scratch.

📝 Abstract
Reinforcement learning (RL) algorithms struggle with learning optimal policies for tasks where reward feedback is sparse and depends on a complex sequence of events in the environment. Probabilistic reward machines (PRMs) are finite-state formalisms that can capture temporal dependencies in the reward signal, along with nondeterministic task outcomes. While special RL algorithms can exploit this finite-state structure to expedite learning, PRMs remain difficult to modify and design by hand. This hinders the already difficult tasks of utilizing high-level causal knowledge about the environment, and transferring the reward formalism into a new domain with a different causal structure. This paper proposes a novel method to incorporate causal information in the form of Temporal Logic-based Causal Diagrams into the reward formalism, thereby expediting policy learning and aiding the transfer of task specifications to new environments. Furthermore, we provide a theoretical result about convergence to optimal policy for our method, and demonstrate its strengths empirically.
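To make the abstract's notion of a reward machine concrete, here is a minimal sketch of a probabilistic reward machine for a toy "collect key, then open door" task. All state names, events, and reward distributions below are illustrative assumptions, not taken from the paper; the point is only to show how a finite-state formalism can capture a temporal dependency (the door only yields reward after the key) together with a nondeterministic outcome (the door may jam).

```python
import random

class ProbabilisticRewardMachine:
    """Toy PRM: states u0 -> u1 -> u2, rewards sampled from per-edge distributions."""

    def __init__(self):
        self.state = "u0"
        # transitions: (state, event) -> (next_state, reward distribution)
        # a reward distribution is a list of (probability, reward) pairs
        self.delta = {
            ("u0", "key"):  ("u1", [(1.0, 0.0)]),
            ("u1", "door"): ("u2", [(0.9, 1.0), (0.1, 0.0)]),  # door may jam
        }
        self.terminal = {"u2"}

    def step(self, event):
        """Advance the machine on an environment event; return a sampled reward."""
        key = (self.state, event)
        if key not in self.delta:
            return 0.0  # irrelevant events leave the machine state unchanged
        next_state, dist = self.delta[key]
        self.state = next_state
        r, acc = random.random(), 0.0
        for p, reward in dist:
            acc += p
            if r <= acc:
                return reward
        return dist[-1][1]

    def done(self):
        return self.state in self.terminal
```

Note how opening the door before collecting the key yields no reward and no state change: the sparse, sequence-dependent reward that plain RL struggles with is exactly what the finite-state structure encodes, and what the paper's causal diagrams are meant to help design and transfer.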
Problem

Research questions and friction points this paper is trying to address.

Expediting reinforcement learning for sparse reward tasks
Incorporating temporal causality knowledge into reward formalism
Enabling transfer of task specifications across environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporating Temporal Logic-based Causal Diagrams
Expediting policy learning with causal knowledge
Transferring task specifications to new environments
Jan Corazza
TU Dortmund University
Hadi Partovi Aria
Arizona State University
Daniel Neider
TU Dortmund University and Center for Trustworthy Data Science and Security
Formal Methods · Machine Learning · Logic · Artificial Intelligence
Zhe Xu
Arizona State University