🤖 AI Summary
High-level robot controllers often fail when environmental assumptions are violated, and existing formal repair methods suffer from prohibitive computational cost and poor scalability to large state spaces.
Method: We propose an LLM-guided iterative automated repair framework that translates symbolic models into natural language prompts and leverages formal verification feedback to steer large language models in generating and refining repair strategies.
Contribution/Results: This work integrates LLMs with formal verification for controller repair. Evaluated on 12 assumption violations spanning diverse workspaces, state space sizes, and task complexities, the framework achieves efficient, scalable repair. Experiments demonstrate substantial improvements in repair efficiency over purely formal approaches, while supporting more complex controllers and dynamic environment modeling.
📝 Abstract
This paper presents INPROVF, an automatic framework that combines large language models (LLMs) and formal methods to speed up the repair process of high-level robot controllers. Previous approaches based solely on formal methods are computationally expensive and cannot scale to large state spaces. In contrast, INPROVF uses LLMs to generate repair candidates, and formal methods to verify their correctness. To improve the quality of these candidates, our framework first translates the symbolic representations of the environment and controllers into natural language descriptions. If a candidate fails the verification, INPROVF provides feedback on potential unsafe behaviors or unsatisfied tasks, and iteratively prompts LLMs to generate improved solutions. We demonstrate the effectiveness of INPROVF through 12 violations with various workspaces, tasks, and state space sizes.
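The generate-verify-feedback loop the abstract describes can be sketched roughly as follows. This is a minimal illustration, not INPROVF's actual interface: `propose_repair`, `verify`, and all other names are hypothetical stand-ins, and the LLM call and formal verifier are assumed to be supplied by the caller.

```python
# Hypothetical sketch of an LLM-guided repair loop in the style described
# by the abstract: an LLM proposes repair candidates from a natural-language
# description, a formal verifier checks each candidate, and verification
# feedback (unsafe behaviors or unsatisfied tasks) is folded into the next
# prompt. All names here are illustrative, not the paper's API.

def repair_loop(description, propose_repair, verify, max_iters=5):
    """Iterate until a repair candidate passes formal verification.

    description    -- natural-language description of the environment,
                      controller, and violated assumption
    propose_repair -- callable(prompt) -> candidate repair
                      (stand-in for an LLM call)
    verify         -- callable(candidate) -> (ok, feedback), where
                      feedback explains the failure when ok is False
    """
    prompt = description
    for _ in range(max_iters):
        candidate = propose_repair(prompt)
        ok, feedback = verify(candidate)
        if ok:
            return candidate
        # Fold the verifier's counterexample feedback into the next prompt.
        prompt = description + "\nPrevious candidate failed: " + feedback
    return None  # no verified repair found within the iteration budget
```

With mocked components, the loop converges as soon as the feedback-augmented prompt yields a candidate the verifier accepts; the formal check guarantees that whatever is returned is correct, regardless of LLM quality.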