Can Language Models Falsify? Evaluating Algorithmic Reasoning with Counterexample Creation

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LMs) show a critical deficiency in falsification: autonomously generating executable counterexamples to plausibly correct but actually incorrect algorithmic solutions, a capability orthogonal to standard forward-solving evaluation. Existing benchmarks predominantly assess solution generation and correctness verification, neglecting this inverse reasoning challenge. Method: the paper introduces REFUTE, a dynamically updating benchmark built from recent programming-competition problems and incorrect submissions for which human experts have already identified counterexamples, and evaluates reasoning agents that can iterate with program execution feedback. Contribution/Results: even the strongest agent tested, OpenAI o3-mini (high) with code execution feedback, produces valid counterexamples for under 9% of incorrect solutions, even though ratings indicate it can solve up to 48% of the same problems from scratch. The work formalizes and empirically evaluates LMs' falsification ability, establishing counterexample creation as a distinct dimension in LM assessment and filling a gap in the evaluation landscape.

📝 Abstract
There is growing excitement about the potential of Language Models (LMs) to accelerate scientific discovery. Falsifying hypotheses is key to scientific progress, as it allows claims to be iteratively refined over time. This process requires significant researcher effort, reasoning, and ingenuity. Yet current benchmarks for LMs predominantly assess their ability to generate solutions rather than challenge them. We advocate for developing benchmarks that evaluate this inverse capability - creating counterexamples for subtly incorrect solutions. To demonstrate this approach, we start with the domain of algorithmic problem solving, where counterexamples can be evaluated automatically using code execution. Specifically, we introduce REFUTE, a dynamically updating benchmark that includes recent problems and incorrect submissions from programming competitions, where human experts successfully identified counterexamples. Our analysis finds that the best reasoning agents, even OpenAI o3-mini (high) with code execution feedback, can create counterexamples for only <9% of incorrect solutions in REFUTE, even though ratings indicate its ability to solve up to 48% of these problems from scratch. We hope our work spurs progress in evaluating and enhancing LMs' ability to falsify incorrect solutions - a capability that is crucial for both accelerating research and making models self-improve through reliable reflective reasoning.
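The abstract notes that in algorithmic problem solving, counterexamples can be checked automatically: an input falsifies a solution exactly when the incorrect program's output diverges from a correct one's. A minimal sketch of that check, using a hypothetical buggy maximum-subarray-sum solution (not from the paper) against a correct reference:

```python
def buggy_solution(nums):
    # Subtly incorrect: resets the running sum at 0, so it
    # returns 0 for all-negative arrays instead of the true maximum.
    best = cur = 0
    for x in nums:
        cur = max(0, cur + x)
        best = max(best, cur)
    return best

def reference_solution(nums):
    # Correct Kadane's algorithm: the running sum may stay negative.
    best = cur = nums[0]
    for x in nums[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def is_counterexample(candidate_input):
    # A candidate input falsifies the buggy solution iff the
    # two programs disagree on it.
    return buggy_solution(candidate_input) != reference_solution(candidate_input)

# An all-negative array exposes the bug (buggy: 0, reference: -1),
# while a mixed array does not (both return 3).
print(is_counterexample([-3, -1, -2]))  # True
print(is_counterexample([1, -2, 3]))    # False
```

REFUTE's task is the hard part this sketch takes for granted: given only the problem statement and the incorrect submission, the model must *find* such a distinguishing input itself.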
Problem

Research questions and friction points this paper is trying to address.

Evaluate LMs' ability to falsify hypotheses
Develop benchmarks for counterexample creation
Assess LMs' reflective reasoning in algorithmic problem solving
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces REFUTE, a dynamically updating benchmark
Evaluates LMs' ability to create counterexamples for incorrect solutions
Focuses on algorithmic problem solving, where counterexamples can be checked automatically via code execution