Advancing Research via Human-AI Interactive Theorem Proving

📅 2025-12-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Ensuring mathematical rigor while leveraging large language models (LLMs) for scientific computing and theorem discovery remains challenging.

Method: We propose a "human-led, AI-exploratory" dual-track paradigm: domain experts formulate problems and hypotheses, while LLMs collaboratively perform proof search, counterexample generation, candidate theorem synthesis, and constraint-satisfying structural design, integrated with numerical experimentation, formal verification prompting, manifold optimization, and cross-modal quantum algorithm modeling. Strict human–AI responsibility boundaries guarantee traceability and verifiability.

Contribution/Results: Applied to the interplay between manifold optimization and Grover's quantum search, our framework identifies an invariant subspace for the first time, discovers a Grover-compatible retraction mapping, and rigorously proves convergence of the retraction-based gradient method, thereby substantially accelerating theorem discovery and algorithm design.
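For context on the quantum-search side of the case study, the standard Grover iteration alternates two reflections: an oracle that flips the phase of the marked state and a diffusion operator that reflects amplitudes about their mean. A minimal NumPy simulation of textbook Grover search (an illustration of the background algorithm, not the paper's framework; the function name and parameters are chosen here for exposition):

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Simulate textbook Grover search with one marked basis state."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))        # uniform superposition |s>
    oracle = np.ones(N)
    oracle[marked] = -1                        # phase flip on the marked state
    n_iters = int(round(np.pi / 4 * np.sqrt(N)))  # ~optimal iteration count
    for _ in range(n_iters):
        state = oracle * state                 # oracle reflection
        state = 2 * state.mean() - state       # diffusion: reflect about the mean
    return state

amps = grover_search(4, marked=3)
probs = amps ** 2                              # amplitudes are real here
print(probs[3])                                # marked state amplified close to 1
```

The diffusion step uses the identity (2|s><s| - I)v = 2*mean(v) - v for the uniform state |s>, which avoids building the N-by-N operator explicitly.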

📝 Abstract
We investigate how large language models can be used as research tools in scientific computing while preserving mathematical rigor. We propose a human-in-the-loop workflow for interactive theorem proving and discovery with LLMs. Human experts retain control over problem formulation and admissible assumptions, while the model searches for proofs or contradictions, proposes candidate properties and theorems, and helps construct structures and parameters that satisfy explicit constraints, supported by numerical experiments and simple verification checks. Experts treat these outputs as raw material, further refine them, and organize the results into precise statements and rigorous proofs. We instantiate this workflow in a case study on the connection between manifold optimization and Grover's quantum search algorithm, where the pipeline helps identify invariant subspaces, explore Grover-compatible retractions, and obtain convergence guarantees for the retraction-based gradient method. The framework provides a practical template for integrating large language models into frontier mathematical research, enabling faster exploration of proof space and algorithm design while maintaining transparent reasoning responsibilities. Although illustrated on manifold optimization problems in quantum computing, the principles extend to other core areas of scientific computing.
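The abstract's "retraction-based gradient method" refers to a standard pattern in manifold optimization: take a step along the (projected) Riemannian gradient, then retract back onto the manifold. A generic sketch on the unit sphere, minimizing the quadratic f(x) = xᵀAx (this illustrates the general technique only; the paper's Grover-compatible retraction and convergence proof are not reproduced here, and all names and parameters below are illustrative):

```python
import numpy as np

def sphere_retraction(x, v):
    """Retraction on the unit sphere: move in the tangent direction, then renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def riemannian_grad(x, egrad):
    """Project the Euclidean gradient onto the tangent space of the sphere at x."""
    return egrad - (x @ egrad) * x

def retraction_gradient_method(A, x0, step=0.05, n_iters=2000):
    """Minimize f(x) = x^T A x over the unit sphere by retraction-based gradient descent."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iters):
        egrad = 2 * A @ x                      # Euclidean gradient of x^T A x
        g = riemannian_grad(x, egrad)          # Riemannian gradient
        x = sphere_retraction(x, -step * g)    # descent step + retraction
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M + M.T                                    # symmetric test matrix
x = retraction_gradient_method(A, rng.standard_normal(5))
# x approaches an eigenvector for the smallest eigenvalue of A
```

The normalization retraction keeps every iterate feasible without any explicit constraint handling, which is what makes retraction choice (and, in the paper's setting, its compatibility with the Grover dynamics) central to the analysis.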
Problem

Research questions and friction points this paper is trying to address.

Integrating large language models into rigorous mathematical research workflows
Enabling human-AI collaboration for theorem proving and discovery
Applying the interactive framework to manifold optimization and quantum computing problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-in-the-loop interactive theorem proving workflow
LLM-assisted proof search and theorem proposal
Constraint-driven structure construction with verification checks
Chenyi Li
Beijing International Center for Mathematical Research, Peking University, Beijing, 100871, People’s Republic of China
Zhijian Lai
Beijing International Center for Mathematical Research, Peking University, Beijing, 100871, People’s Republic of China
Dong An
Beijing International Center for Mathematical Research, Peking University, Beijing, 100871, People’s Republic of China
Jiang Hu
Yau Mathematical Sciences Center, Tsinghua University, Beijing, 100190, People’s Republic of China
Zaiwen Wen
Peking University
Optimization · Machine Learning