Evaluating Large Language Models on Solved and Unsolved Problems in Graph Theory: Implications for Computing Education

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the mathematical reasoning capabilities of large language models (LLMs) on both solved and open problems in graph theory, and the implications for computing education. Employing an eight-stage evaluation protocol that simulates authentic mathematical inquiry—integrating interactive prompt engineering, a mathematical reasoning assessment framework, and expert validation—the work presents a comparative analysis of LLM performance across these two problem categories. Results show that LLMs can generate expert-verified correct proofs for solved problems. When confronted with open problems, LLMs propose plausible exploration strategies without hallucinating, reflecting appropriate handling of uncertainty, yet they fail to achieve substantive breakthroughs. The work thus delineates both the capabilities and the limitations of LLMs in rigorous mathematical reasoning.

📝 Abstract
Large Language Models are increasingly used by students to explore advanced material in computer science, including graph theory. As these tools become integrated into undergraduate and graduate coursework, it is important to understand how reliably they support mathematically rigorous thinking. This study examines the performance of an LLM on two related graph-theoretic problems: a solved problem concerning the gracefulness of line graphs and an open problem for which no solution is currently known. We use an eight-stage evaluation protocol that reflects authentic mathematical inquiry, including interpretation, exploration, strategy formation, and proof construction. The model performed strongly on the solved problem, producing correct definitions, identifying relevant structures, recalling appropriate results without hallucination, and constructing a valid proof confirmed by a graph theory expert. For the open problem, the model generated coherent interpretations and plausible exploratory strategies but did not advance toward a solution. It did not fabricate results and instead acknowledged uncertainty, consistent with explicit prompting instructions that directed the model to avoid inventing theorems or unsupported claims. These findings indicate that LLMs can support exploration of established material but remain limited in tasks requiring novel mathematical insight or critical structural reasoning. For computing education, this distinction highlights the importance of guiding students to use LLMs for conceptual exploration while relying on independent verification and rigorous argumentation for formal problem solving.
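For readers unfamiliar with the graph-theory notion the abstract's solved problem rests on: a graph with m edges is *graceful* if its vertices can be labeled with distinct integers from {0, ..., m} such that the absolute differences across edges are exactly {1, ..., m}. The following brute-force checker is a minimal illustrative sketch (not code from the paper), usable only for very small graphs:

```python
from itertools import permutations

def find_graceful_labeling(vertices, edges):
    """Brute-force search for a graceful labeling.

    A labeling assigns distinct integers from {0, ..., m} (m = number of
    edges) to the vertices so that edge labels |f(u) - f(v)| cover
    {1, ..., m} exactly. Returns one such labeling, or None.
    """
    m = len(edges)
    for labels in permutations(range(m + 1), len(vertices)):
        f = dict(zip(vertices, labels))
        edge_labels = {abs(f[u] - f[v]) for u, v in edges}
        if edge_labels == set(range(1, m + 1)):
            return f
    return None

# The path on 4 vertices (3 edges) is a classic graceful graph.
labeling = find_graceful_labeling([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)])
print(labeling is not None)  # True
```

The search is factorial in the number of vertices, so it serves only to make the definition concrete; the paper's problems concern structural results (e.g., gracefulness of line graphs), not exhaustive search.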
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Graph Theory
Mathematical Reasoning
Computing Education
Open Problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Graph Theory
Mathematical Reasoning
Evaluation Protocol
Computing Education