🤖 AI Summary
This work addresses the limited capability of existing SMT/CHC solvers and first-order theorem provers in handling constraint problems involving inductively defined structures, such as algebraic data types. The paper proposes a novel neurosymbolic approach that deeply integrates large language models (LLMs) with formal constraint solvers. By leveraging structured prompts, the LLM iteratively generates auxiliary lemmas, which are then validated by the solver for both correctness and utility toward the target proof. Evaluated on a diverse benchmark suite featuring algebraic data types and recursive relations, the method solves approximately 25% more inductive proof tasks than current state-of-the-art techniques, substantially overcoming the limitations of traditional solvers in reasoning about recursive structures.
📝 Abstract
Solving constraints involving inductive (aka recursive) definitions is challenging. State-of-the-art SMT/CHC solvers and first-order logic provers provide only limited support for such constraints, especially when they involve, e.g., algebraic data types. In this work, we leverage structured prompts to elicit Large Language Models (LLMs) to generate the auxiliary lemmas necessary for reasoning about these inductive definitions. We further propose a neuro-symbolic approach that synergistically integrates LLMs with constraint solvers: the LLM iteratively generates conjectures, while the solver checks their validity and their usefulness for proving the goal. We evaluate our approach on a diverse benchmark suite comprising constraints originating from algebraic data types and recurrence relations. The experimental results show that our approach improves on state-of-the-art SMT and CHC solvers, solving considerably more (around 25%) proof tasks involving inductive definitions, demonstrating its efficacy.
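The generate-and-check loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in: `propose_lemmas` plays the role of the LLM conjecture generator, and `check_valid` plays the role of the SMT/CHC solver by brute-force testing on small inputs (a real system would issue proper validity queries, and the usefulness check would test whether the goal becomes provable given the lemma):

```python
def check_valid(prop, bound=40):
    """Stand-in for the solver's validity check: test on small naturals.
    (A real implementation would call an SMT/CHC solver instead.)"""
    return all(prop(n) for n in range(bound))

def sum_to(n):
    """An inductively defined function: sum of 0..n."""
    return 0 if n == 0 else n + sum_to(n - 1)

# Goal to prove: forall n. 2 * sum_to(n) == n * (n + 1)
goal = lambda n: 2 * sum_to(n) == n * (n + 1)

def propose_lemmas():
    """Stand-in for the LLM: yields candidate auxiliary conjectures,
    some of which are wrong and must be filtered out by the solver."""
    yield ("sum_to(n) == n", lambda n: sum_to(n) == n)                       # invalid
    yield ("2*sum_to(n) == n*(n+1)", lambda n: 2 * sum_to(n) == n * (n + 1)) # valid

def prove_with_lemmas(goal):
    """Iteratively collect solver-validated conjectures until the goal closes."""
    validated = []
    for name, lemma in propose_lemmas():
        if not check_valid(lemma):
            continue                      # solver rejects invalid conjectures
        validated.append(name)
        if check_valid(goal):             # usefulness check (trivialized here)
            return validated
    return None

result = prove_with_lemmas(goal)
print(result)  # the invalid conjecture has been filtered out
```

In this toy run, the first conjecture fails the validity check (e.g., `sum_to(2)` is 3, not 2) and is discarded; the second survives and the goal is discharged. The actual interplay between lemma validity and goal provability is, of course, far subtler in the real solver-backed setting.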