Learning to Disprove: Formal Counterexample Generation with Large Language Models

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical gap in AI-driven mathematical reasoning by systematically investigating counterexample generation as an independent task, which has been largely overlooked in favor of theorem proving. The authors propose a symbolic mutation strategy to synthesize diverse training data and introduce an end-to-end verifiable training framework that fine-tunes large language models to produce counterexamples formally verifiable in Lean 4. Their approach integrates a multi-reward expert iteration mechanism to jointly optimize both counterexample generation and theorem-proving capabilities. Experimental results on three newly constructed benchmarks demonstrate that the proposed method significantly outperforms existing baselines, achieving substantial improvements in both the quality of generated counterexamples and training efficiency.
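The hypothesis-discarding mutation described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the paper's implementation: the tuple representation of a theorem, the `mutate` function, and the example statement are all assumptions made for the sketch.

```python
def mutate(theorem):
    """Hypothesis-dropping mutation (a sketch of the paper's idea).

    A theorem is represented here as (hypotheses, conclusion). For each
    hypothesis, emit a variant with that hypothesis removed. Each variant
    is a candidate false statement; the model is then trained to disprove
    it by producing a Lean-verifiable counterexample.
    """
    hyps, concl = theorem
    for i in range(len(hyps)):
        yield (hyps[:i] + hyps[i + 1:], concl)


# Example: "for an integer n, if 0 < n and n < 10 then 1 <= n".
thm = (["0 < n", "n < 10"], "1 <= n")

mutants = list(mutate(thm))
# Two mutants: one drops "0 < n" (now false, e.g. n = 0),
# one drops "n < 10" (still true, so it is filtered out later).
```

In the actual pipeline such mutants would still need to be checked, since dropping a hypothesis does not always make the statement false; only the refutable variants become counterexample training instances.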

📝 Abstract
Mathematical reasoning demands two critical, complementary skills: constructing rigorous proofs for true statements and discovering counterexamples that disprove false ones. However, current AI efforts in mathematics focus almost exclusively on proof construction, often neglecting the equally important task of finding counterexamples. In this paper, we address this gap by fine-tuning large language models (LLMs) to reason about and generate counterexamples. We formalize this task as formal counterexample generation, which requires LLMs not only to propose candidate counterexamples but also to produce formal proofs that can be automatically verified in the Lean 4 theorem prover. To enable effective learning, we introduce a symbolic mutation strategy that synthesizes diverse training data by systematically extracting theorems and discarding selected hypotheses, thereby producing diverse counterexample instances. Together with curated datasets, this strategy enables a multi-reward expert iteration framework that substantially enhances both the effectiveness and efficiency of training LLMs for counterexample generation and theorem proving. Experiments on three newly collected benchmarks validate the advantages of our approach, showing that the mutation strategy and training framework yield significant performance gains.
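To make the task concrete, here is what a hypothesis-dropping mutation and its formally verified counterexample might look like in Lean 4. This example is not from the paper; the theorem, names, and tactics (it assumes the built-in `omega` tactic for linear integer arithmetic) are chosen purely for illustration.

```lean
-- Original theorem (provable): every positive integer is at least 1.
theorem orig_thm : ∀ n : Int, 0 < n → 1 ≤ n := by
  intro n h
  omega

-- Symbolic mutation drops the hypothesis `0 < n`, producing a false
-- statement. The training target is a proof of its negation, which the
-- Lean 4 kernel verifies end to end; `n = 0` is the counterexample.
theorem counterexample : ¬ ∀ n : Int, 1 ≤ n := by
  intro h
  have h0 := h 0
  omega
```

A proof of the negated statement plays the role of a certified counterexample: the model must both pick the witness (here `n = 0`) and close the resulting goal formally.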
Problem

Research questions and friction points this paper is trying to address.

counterexample generation
mathematical reasoning
formal verification
large language models
theorem proving
Innovation

Methods, ideas, or system contributions that make the work stand out.

formal counterexample generation
large language models
symbolic mutation
Lean 4
expert iteration