🤖 AI Summary
Ensuring mathematical rigor while leveraging large language models (LLMs) for scientific computing and theorem discovery remains challenging. Method: We propose a "human-led, AI-exploratory" dual-track paradigm in which domain experts formulate problems and hypotheses while LLMs perform proof search, counterexample generation, candidate theorem synthesis, and constraint-satisfying structural design, supported by numerical experimentation, formal verification prompting, manifold optimization, and cross-modal quantum algorithm modeling. Strict human–AI responsibility boundaries guarantee traceability and verifiability. Contribution/Results: Applied to the interplay between manifold optimization and Grover's quantum search, the framework identifies an invariant subspace for the first time, discovers a Grover-compatible retraction mapping, and rigorously proves convergence of the retraction-based gradient method, thereby substantially accelerating theorem discovery and algorithm design.
📝 Abstract
We investigate how large language models can be used as research tools in scientific computing while preserving mathematical rigor. We propose a human-in-the-loop workflow for interactive theorem proving and discovery with LLMs. Human experts retain control over problem formulation and admissible assumptions, while the model searches for proofs or contradictions, proposes candidate properties and theorems, and helps construct structures and parameters that satisfy explicit constraints, supported by numerical experiments and simple verification checks. Experts treat these outputs as raw material, further refine them, and organize the results into precise statements and rigorous proofs. We instantiate this workflow in a case study on the connection between manifold optimization and Grover's quantum search algorithm, where the pipeline helps identify invariant subspaces, explore Grover-compatible retractions, and obtain convergence guarantees for the retraction-based gradient method. The framework provides a practical template for integrating large language models into frontier mathematical research, enabling faster exploration of proof space and algorithm design while maintaining transparent reasoning responsibilities. Although illustrated on manifold optimization problems in quantum computing, the principles extend to other core areas of scientific computing.
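To make the "retraction-based gradient method" concrete, here is a minimal sketch of Riemannian gradient ascent on the unit sphere using the standard metric-projection retraction. This is our own illustration under simple assumptions (a quadratic overlap objective with a marked basis state `t`), not the paper's actual Grover-compatible construction or its invariant-subspace analysis.

```python
import numpy as np

# Sketch: Riemannian gradient ascent on the unit sphere S^{n-1},
# maximizing the overlap f(x) = <t, x>^2 with a marked state t,
# using the metric-projection retraction R_x(v) = (x + v)/||x + v||.
# Illustrative only; the paper's Grover-compatible retraction differs.

def retract(x, v):
    """Map a tangent step v at x back onto the sphere by normalization."""
    y = x + v
    return y / np.linalg.norm(y)

def riemannian_grad(x, t):
    """Project the Euclidean gradient of <t,x>^2 onto the tangent space at x."""
    g = 2.0 * (t @ x) * t      # Euclidean gradient
    return g - (g @ x) * x     # remove the radial component

rng = np.random.default_rng(0)
n = 16
t = np.zeros(n)
t[3] = 1.0                                   # "marked" basis state
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                       # random unit-norm start

for _ in range(200):
    x = retract(x, 0.5 * riemannian_grad(x, t))

print(abs(t @ x))                            # overlap approaches 1
```

Each iterate stays exactly on the manifold because the retraction renormalizes after every tangent step; this is the structural property that a Grover-compatible retraction must preserve in the quantum setting.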