🤖 AI Summary
To address the challenge of modeling and optimizing reinforcement learning under non-Markovian, history-dependent reward structures, this paper proposes an automaton-guided preference learning framework. It employs Deterministic Finite Automata (DFA) to automatically generate trajectory preference signals, replacing hand-crafted reward functions, and introduces a static/dynamic dual-mode optimization mechanism that enables end-to-end policy learning without explicit reward signals, with theoretical convergence guarantees. This work is the first to directly leverage DFA structure for trajectory preference generation, unifying preference learning, inverse reinforcement learning, and iterative policy optimization. Empirical evaluation across discrete and continuous control benchmarks demonstrates significant improvements over conventional reward engineering, Linear Temporal Logic (LTL)-based planning, and reward machine approaches, yielding better policy generalization and scalability while requiring minimal human intervention.
📝 Abstract
Reinforcement Learning (RL) in environments with complex, history-dependent reward structures poses significant challenges for traditional methods. In this work, we introduce a novel approach that leverages automaton-based feedback to guide the learning process, replacing explicit reward functions with preferences derived from a deterministic finite automaton (DFA). Unlike conventional approaches that use automata for direct reward specification, our method employs the structure of the DFA to generate preferences over trajectories, which are then used to learn a reward function, eliminating the need for manual reward engineering. Our framework offers two modes: a static approach that uses the learned reward function directly for policy optimization, and a dynamic approach that continually refines the reward function and policy through iterative updates until convergence.
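To make the core idea concrete, here is a minimal sketch of DFA-derived trajectory preferences. The DFA, its states, and the progress heuristic (accepted trajectories beat unaccepted ones; among unaccepted ones, more automaton progress wins) are illustrative assumptions for exposition, not the paper's exact construction:

```python
from dataclasses import dataclass

@dataclass
class DFA:
    """Toy DFA for a 'reach A, then B' temporal task (hypothetical example)."""
    start: str
    accepting: set
    transitions: dict  # (state, symbol) -> next state; missing keys self-loop

    def progress(self, trajectory):
        """Run a trajectory through the DFA and score it as a tuple:
        (reached an accepting state?, number of distinct states visited).
        Tuple comparison makes acceptance dominate, with automaton
        progress as a tie-breaker."""
        state = self.start
        visited = {state}
        for sym in trajectory:
            state = self.transitions.get((state, sym), state)
            visited.add(state)
        return (state in self.accepting, len(visited))

def prefer(dfa, traj_a, traj_b):
    """Preference label derived purely from DFA structure -- no hand-crafted
    reward. Pairs labeled this way can train a reward model
    (e.g. Bradley-Terry style), which then drives policy optimization."""
    return traj_a if dfa.progress(traj_a) >= dfa.progress(traj_b) else traj_b

dfa = DFA(
    start="q0",
    accepting={"q2"},
    transitions={("q0", "A"): "q1", ("q1", "B"): "q2"},
)
# A trajectory completing A-then-B is preferred over one that only reaches A.
print(prefer(dfa, ["A", "B"], ["A"]))  # ['A', 'B']
```

In the dynamic mode described above, this labeling step would be repeated on fresh trajectories from the current policy, so the learned reward keeps pace with the states the policy actually visits.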
Our experiments in both discrete and continuous environments demonstrate that our approach enables the RL agent to learn effective policies for tasks with temporal dependencies, outperforming traditional reward engineering and automaton-based baselines such as reward machines and LTL-guided methods. Our results highlight the advantages of automaton-based preferences in handling non-Markovian rewards, offering a scalable, efficient, and human-independent alternative to traditional reward modeling. We also provide a convergence guarantee showing that under standard assumptions our automaton-guided preference-based framework learns a policy that is near-optimal with respect to the true non-Markovian objective.