Incremental Data-Driven Policy Synthesis via Game Abstractions

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the synthesis of control strategies for unknown discrete-time stochastic systems under temporal logic specifications. Method: a data-driven, incremental game-abstraction framework that constructs upper and lower approximations of state reachability sets and proves that they evolve monotonically as data accumulates. An efficient rank-based incremental algorithm updates the winning region by exploiting DAG-structured subgames, avoiding a full re-solve of the abstracted stochastic game. Contribution/Results: to the authors' knowledge, this is the first work to integrate incremental abstraction with stochastic game solving for online, dynamic policy adaptation. Theoretical analysis guarantees approximation convergence and reduced computational complexity, and experiments show significantly lower computational overhead than baseline methods, enabling real-time iterative refinement and adaptation as the system evolves.

📝 Abstract
We address the synthesis of control policies for unknown discrete-time stochastic dynamical systems subject to temporal logic objectives. We present a data-driven, abstraction-based control framework that integrates online learning with a novel incremental game-solving technique. Under appropriate continuity assumptions, our method abstracts the system dynamics into a finite stochastic (2.5-player) game graph derived from data. Given a temporal logic objective on this graph, we compute the winning region -- i.e., the set of initial states from which the objective is satisfiable -- together with a corresponding control policy. Our main contribution is the incremental construction of abstractions, winning regions, and control policies as data about the system dynamics accumulates. Concretely, our algorithm refines under- and over-approximations of the reachable sets for each state-action pair as new data samples arrive. These refinements induce structural modifications in the game graph abstraction -- such as the addition or removal of nodes and edges -- which in turn modify the winning region. Crucially, we show that these updates are inherently monotonic: under-approximations can only grow, over-approximations can only shrink, and the winning region can only expand. We exploit this monotonicity by defining an objective-induced ranking function on the nodes of the abstract game that increases monotonically as new data samples are incorporated. These ranks underpin our novel incremental game-solving algorithm, which employs customized gadgets (DAG-like subgames) within a rank-lifting procedure to efficiently update the winning region. Numerical case studies demonstrate significant computational savings over the baseline approach, which re-solves the entire game from scratch whenever new data samples arrive.
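The paper's rank-lifting algorithm is not reproduced here, but the winning-region computation it refines, and the monotone effect of an abstraction update, can be illustrated with a standard attractor construction for a reachability objective on a two-player game graph. Everything below is a sketch: the node names and the refinement step are invented for illustration, and this sketch re-solves from scratch, which is exactly the baseline cost the paper's incremental algorithm avoids.

```python
from collections import deque

def attractor(nodes, edges, owner, targets):
    """Player-0 attractor of `targets` in a two-player game graph.

    owner[v] in {0, 1}: player 0 (controller) picks the move at her
    nodes, player 1 (environment) at his.  Player 0 wins by reaching
    a target node.
    """
    preds = {v: [] for v in nodes}
    escapes = {v: 0 for v in nodes}          # outgoing-edge count per node
    for u, v in edges:
        preds[v].append(u)
        escapes[u] += 1
    win, queue = set(targets), deque(targets)
    while queue:
        v = queue.popleft()
        for u in preds[v]:
            if u in win:
                continue
            if owner[u] == 0:                # player 0 needs one good edge
                win.add(u)
                queue.append(u)
            else:
                escapes[u] -= 1              # player 1 loses one escape
                if escapes[u] == 0:
                    win.add(u)
                    queue.append(u)
    return win

# Toy abstraction: environment node "b" can still escape to sink "s".
nodes = {"a", "b", "s", "t"}
owner = {"a": 0, "b": 1, "s": 0, "t": 0}
edges = [("a", "t"), ("b", "t"), ("b", "s"), ("s", "s"), ("t", "t")]
win_before = attractor(nodes, edges, owner, {"t"})

# A new data sample shrinks the over-approximation: edge ("b", "s") is
# ruled out, so the winning region can only expand -- "b" joins it.
refined = [e for e in edges if e != ("b", "s")]
win_after = attractor(nodes, refined, owner, {"t"})
```

In this toy instance `win_before` is `{"t", "a"}` and `win_after` is `{"t", "a", "b"}`, matching the paper's monotonicity claim: deleting an edge of the over-approximated abstraction never removes states from the winning region.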
Problem

Research questions and friction points this paper is trying to address.

Synthesizing control policies for unknown stochastic systems with temporal logic objectives
Incrementally constructing game abstractions and winning regions from accumulating data
Developing efficient incremental algorithms to update control policies with new data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incremental game-solving with DAG-like subgames
Monotonic abstraction updates via data accumulation
Rank-lifting algorithm for efficient policy updates
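The monotonic abstraction update named above can be sketched as per state-action bookkeeping: each new sample adds its observed successor cell to the under-approximation, and may rule cells out of the over-approximation. A minimal sketch, assuming a finite cell partition; the `ruled_out` argument is a hypothetical stand-in for whatever cells the paper's continuity assumptions let a sample exclude.

```python
class ReachApprox:
    """Under/over-approximated successor cells for one state-action pair.

    Invariant maintained across samples: under only grows, over only
    shrinks, and under remains a subset of over.
    """

    def __init__(self, all_cells):
        self.under = set()             # certainly reachable
        self.over = set(all_cells)     # not yet ruled out

    def add_sample(self, observed_cell, ruled_out):
        self.under.add(observed_cell)              # grow under-approx
        self.over -= set(ruled_out) - {observed_cell}  # shrink over-approx
        assert self.under <= self.over             # monotone sandwich

r = ReachApprox({"c1", "c2", "c3"})
r.add_sample("c1", ruled_out=["c3"])
r.add_sample("c2", ruled_out=[])
```

After these two hypothetical samples `r.under == {"c1", "c2"}` and `r.over == {"c1", "c2"}`; once the two sets coincide, the corresponding game-graph edges are exact and no further structural updates are triggered for that state-action pair.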