Constraint-Generation Policy Optimization (CGPO): Nonlinear Programming for Policy Optimization in Mixed Discrete-Continuous MDPs

📅 2024-01-20
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the optimization of compact, interpretable policies, such as piecewise-linear policies, for mixed discrete-continuous Markov decision processes (DC-MDPs). It proposes the first constraint-generation-based bilevel mixed-integer nonlinear programming (MINLP) framework for this problem. The method integrates adversarial constraint generation, chance-constrained modeling of stochastic transitions, and modern nonlinear solvers (IPOPT/Knitro), yielding bounded policy error guarantees over infinite sets of initial states and provably optimal policies whenever it terminates with zero optimality gap. The contributions are threefold: (1) theoretically, tight upper bounds on policy approximation error and optimality gap; (2) practically, worst-case trajectory synthesis and policy defect attribution, enabling interpretability and counterfactual diagnostics; and (3) empirically, validation on inventory control, reservoir operation, and physical system control tasks, demonstrating both high-probability performance guarantees and actionable, implementable policies.

📝 Abstract
We propose the Constraint-Generation Policy Optimization (CGPO) framework to optimize policy parameters within compact and interpretable policy classes for mixed discrete-continuous Markov Decision Processes (DC-MDP). CGPO can not only provide bounded policy error guarantees over an infinite range of initial states for many DC-MDPs with expressive nonlinear dynamics, but it can also provably derive optimal policies in cases where it terminates with zero error. Furthermore, CGPO can generate worst-case state trajectories to diagnose policy deficiencies and provide counterfactual explanations of optimal actions. To achieve such results, CGPO proposes a bilevel mixed-integer nonlinear optimization framework for optimizing policies in defined expressivity classes (e.g. piecewise linear) and reduces it to an optimal constraint generation methodology that adversarially generates worst-case state trajectories. Furthermore, leveraging modern nonlinear optimizers, CGPO can obtain solutions with bounded optimality gap guarantees. We handle stochastic transitions through chance constraints, providing high-probability performance guarantees. We also present a roadmap for understanding the computational complexities of different expressivity classes of policy, reward, and transition dynamics. We experimentally demonstrate the applicability of CGPO across various domains, including inventory control, management of a water reservoir system, and physics control. In summary, CGPO provides structured, compact and explainable policies with bounded performance guarantees, enabling worst-case scenario generation and counterfactual policy diagnostics.
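The bilevel structure described in the abstract, an outer policy optimization against a growing set of worst-case constraints and an inner adversary that generates the next worst-case initial state, can be sketched as a toy constraint-generation loop. This is a hedged illustration only, not the paper's implementation: the 1-D linear dynamics, quadratic cost, linear policy class, and grid search (standing in for the MINLP solvers IPOPT/Knitro) are all assumptions made for the example.

```python
HORIZON = 5
S0_RANGE = (-2.0, 2.0)   # "infinite" set of initial states, here an interval

def rollout_return(theta, s0):
    """Return of the linear policy a = theta0 + theta1*s on toy dynamics
    s' = s + a, with per-step reward -s'^2 (so the optimum is to drive s to 0)."""
    s, total = s0, 0.0
    for _ in range(HORIZON):
        a = theta[0] + theta[1] * s
        s = s + a
        total += -s * s
    return total

def inner_adversary(theta, n=401):
    """Inner problem: find the worst-case initial state for the current policy
    (grid search stands in for a nonlinear solver)."""
    lo, hi = S0_RANGE
    worst_s0, worst_val = lo, float("inf")
    for i in range(n):
        s0 = lo + (hi - lo) * i / (n - 1)
        v = rollout_return(theta, s0)
        if v < worst_val:
            worst_val, worst_s0 = v, s0
    return worst_s0, worst_val

def outer_policy(constraints, n=41):
    """Outer problem: maximize the minimum return over the constraint set
    (the worst-case initial states generated so far)."""
    best_theta, best_val = None, -float("inf")
    grid = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    for t0 in grid:
        for t1 in grid:
            v = min(rollout_return((t0, t1), s0) for s0 in constraints)
            if v > best_val:
                best_val, best_theta = v, (t0, t1)
    return best_theta, best_val

def cgpo_loop(eps=1e-6, max_iters=20):
    """Alternate outer and inner problems until the optimality gap closes."""
    constraints = [S0_RANGE[0]]          # start with a single initial state
    for _ in range(max_iters):
        theta, outer_val = outer_policy(constraints)
        s0, inner_val = inner_adversary(theta)
        gap = outer_val - inner_val      # optimistic bound minus true worst case
        if gap <= eps:                   # terminate with (near-)zero gap
            return theta, gap
        constraints.append(s0)           # add the violated worst case
    return theta, gap
```

In this toy instance the loop terminates immediately with the stabilizing policy a = -s (theta = (0, -1)) and zero gap; in the paper's setting, each level is a MINLP solved to a certified bound, which is what yields the optimality-gap guarantee.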
Problem

Research questions and friction points this paper is trying to address.

How to optimize parameters of compact, interpretable policy classes in mixed discrete-continuous MDPs
How to provide bounded policy error guarantees under expressive nonlinear dynamics and infinite sets of initial states
How to generate worst-case state trajectories for policy diagnostics and counterfactual explanation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bilevel mixed-integer nonlinear optimization framework
Optimal constraint generation methodology
Chance constraints for stochastic transitions
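The last item, handling stochastic transitions through chance constraints, asks that a constraint hold with probability at least 1 - delta rather than always. A minimal Monte Carlo sketch of such a check is below; the one-step dynamics s' = theta + xi, the Gaussian noise model, and the state bound are illustrative assumptions, not the paper's exact MINLP encoding.

```python
import random

def violation_rate(theta, n, rng):
    """Fraction of sampled noise draws xi ~ N(0, 0.3) under which the
    one-step state s' = theta + xi leaves the bound |s'| <= 1."""
    return sum(abs(theta + rng.gauss(0.0, 0.3)) > 1.0 for _ in range(n)) / n

def satisfies_chance_constraint(theta, delta=0.05, n=10_000, seed=0):
    """True if the empirical probability of violating |s'| <= 1 stays
    below delta, i.e. P[|s'| <= 1] >= 1 - delta holds empirically."""
    return violation_rate(theta, n, random.Random(seed)) <= delta
```

For example, theta = 0.0 satisfies the constraint comfortably (a violation needs noise beyond three standard deviations), while theta = 0.9 fails it, since roughly a third of the noise draws push the state past the bound.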