🤖 AI Summary
Interpretable policy optimization in Markov decision processes (MDPs) suffers from computational intractability and poor scalability. Method: We propose a verifiable decision-tree policy learning framework that formulates decision-tree policy optimization as a mixed-integer linear program (MILP). To tame the resulting complexity, we design a reduced-space branch-and-bound algorithm that explicitly decouples the MDP dynamics constraints from the tree-structure constraints, enabling efficient parallel search while guaranteeing global optimality of the learned decision tree at each iteration. Contribution/Results: Our method achieves an order-of-magnitude speedup over state-of-the-art approaches on standard benchmarks, scales to significantly larger MDPs, and yields policies that combine high performance, compact representation, and strong interpretability, making them well suited to high-stakes decision-making domains.
📝 Abstract
Interpretable reinforcement learning policies are essential for high-stakes decision-making, yet optimizing decision tree policies in Markov Decision Processes (MDPs) remains challenging. We propose SPOT, a method for computing decision tree policies that formulates the optimization problem as a mixed-integer linear program (MILP). To enhance efficiency, we employ a reduced-space branch-and-bound approach that decouples the MDP dynamics from the tree-structure constraints, enabling efficient parallel search while ensuring that each iteration yields an optimal decision tree. Experimental results on standard benchmarks show that SPOT runs an order of magnitude faster than existing approaches and scales to MDPs with significantly more states. The resulting decision tree policies are compact and interpretable, maintaining transparency without compromising performance, demonstrating that interpretability and scalability can be achieved simultaneously.
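To make the core idea concrete, the sketch below illustrates the separation the abstract describes: the search over tree structures is kept apart from the MDP dynamics, which are used only to evaluate each candidate policy exactly. This is not the SPOT implementation; the toy MDP, its features, rewards, and the depth-1 tree parameterization are all invented for illustration, and a plain enumeration of the small structure space stands in for the paper's MILP-based reduced-space branch-and-bound.

```python
# Hypothetical toy example, not SPOT itself: search over depth-1 decision-tree
# policies for a tiny hand-made MDP. The "tree structure" space (threshold and
# leaf actions) is enumerated, and each induced policy is scored by exact
# policy evaluation on the MDP dynamics, mirroring the decoupling of
# tree-structure constraints from dynamics constraints.
import itertools

features = [0.0, 1.0, 2.0, 3.0]   # 1-D observable feature per state (made up)
P = {                              # P[action][s] = deterministic next state
    0: [0, 0, 1, 2],               # action 0 moves "left"
    1: [1, 2, 3, 3],               # action 1 moves "right"
}
R = {                              # R[action][s] = immediate reward
    0: [0.0, 0.1, 0.1, 0.1],
    1: [0.2, 0.2, 0.2, 1.0],
}
gamma = 0.9

def evaluate(policy, iters=500):
    """Value of a fixed policy via iterative policy evaluation."""
    V = [0.0] * len(features)
    for _ in range(iters):
        V = [R[policy[s]][s] + gamma * V[P[policy[s]][s]]
             for s in range(len(features))]
    return sum(V) / len(V)         # mean value over a uniform start distribution

def tree_policy(threshold, a_left, a_right):
    """Depth-1 tree: the action depends only on feature <= threshold."""
    return [a_left if f <= threshold else a_right for f in features]

# Exhaustive search over the tree-structure space (stand-in for branch-and-bound).
best = max(
    ((evaluate(tree_policy(t, al, ar)), (t, al, ar))
     for t, al, ar in itertools.product([0.5, 1.5, 2.5], [0, 1], [0, 1])),
    key=lambda x: x[0],
)
print(best)
```

In SPOT, this enumeration is replaced by branching on the discrete tree-structure variables of a MILP while bounding via relaxations, so candidate subtrees can be pruned without evaluating every structure, and the per-candidate evaluations are independent, which is what enables the parallel search the abstract mentions.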