Gödel Test: Can Large Language Models Solve Easy Conjectures?

📅 2025-09-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study introduces the "Gödel Test," a systematic evaluation of whether large language models (LLMs) can produce original proofs for simple, previously unsolved mathematical conjectures. Method: GPT-5 is evaluated on five conjectures in combinatorial optimization; for each, the model receives one or two source papers from which the conjecture arose (the authors' own conjecture is withheld), and its reasoning is assessed in detail across conjecture verification, counterexample construction, and proof generation. Contribution/Results: On the three easier problems, GPT-5 produced nearly correct solutions; on one, it derived a different approximation guarantee that refuted the authors' conjecture while still providing a valid solution. It failed on the problem requiring results to be combined from two papers, and on a harder problem without a validated conjecture it proposed the intended algorithm but failed in the analysis. This work pioneers LLM evaluation at the level of original mathematical conjecture solving, providing empirical evidence that contemporary LLMs possess nascent capabilities for creative mathematical reasoning, alongside clear limitations in cross-paper compositional reasoning.

📝 Abstract
Recent announcements from frontier AI model labs have highlighted strong results on high-school and undergraduate math competitions. Yet it remains unclear whether large language models can solve new, simple conjectures in more advanced areas of mathematics. We propose the Gödel Test: evaluating whether a model can produce correct proofs for very simple, previously unsolved conjectures. To this end, we study the performance of GPT-5 on five conjectures in combinatorial optimization. For each problem, we provided one or two source papers from which the conjecture arose, withheld our own conjecture, and then assessed the model's reasoning in detail. On the three easier problems, GPT-5 produced nearly correct solutions; for Problem 2 it even derived a different approximation guarantee that, upon checking, refuted our conjecture while providing a valid solution. The model failed on Problem 4, which required combining results from two papers. On Problem 5, a harder case without a validated conjecture, GPT-5 proposed the same algorithm we had in mind but failed in the analysis, suggesting the proof is more challenging than expected. Although our sample is small, the results point to meaningful progress on routine reasoning, occasional flashes of originality, and clear limitations when cross-paper synthesis is required. GPT-5 may represent an early step toward frontier models eventually passing the Gödel Test.
Problem

Research questions and friction points this paper is trying to address.

Evaluating whether LLMs can prove simple, previously unsolved mathematical conjectures
Testing GPT-5 on five combinatorial optimization conjectures drawn from recent papers
Assessing the model's reasoning ability and its limitations in producing mathematical proofs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposing the Gödel Test: judging a model by its ability to prove simple unsolved conjectures
Supplying only the source papers for each problem while withholding the authors' own conjectures
Detailed, case-by-case assessment of proof correctness, originality, and cross-paper synthesis