Vibe Reasoning: Eliciting Frontier AI Mathematical Capabilities -- A Case Study on IMO 2025 Problem 6

📅 2025-12-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
State-of-the-art large language models publicly failed on IMO 2025 Problem 6—a combinatorial optimization challenge—due not to insufficient mathematical knowledge, but to a fundamental deficiency in mathematical knowledge *application* mechanisms. Method: We propose “Vibe Reasoning,” a human-AI collaborative paradigm integrating meta-prompt engineering, agent embodiment, and multi-model orchestration (GPT-5 for exploration, Gemini 3 Pro for formal proof generation, Python for symbolic execution, persistent file-based memory, and multi-step workflow management), forming a closed-loop reasoning architecture. Contribution/Results: Our approach enables the first demonstrated evolution from problem-specific prompts to generalizable, transferable meta-prompts. Minimal human guidance suffices to activate deep deductive reasoning in LLMs. We successfully derive the correct answer (2112) and produce a rigorous, human-verifiable mathematical proof—establishing a reusable methodology and systematic pathway for automated mathematical reasoning.

📝 Abstract
We introduce Vibe Reasoning, a human-AI collaborative paradigm for solving complex mathematical problems. Our key insight is that frontier AI models already possess the knowledge required to solve challenging problems -- they simply do not know how, what, or when to apply it. Vibe Reasoning transforms AI's latent potential into manifested capability through generic meta-prompts, agentic grounding, and model orchestration. We demonstrate this paradigm through IMO 2025 Problem 6, a combinatorial optimization problem where autonomous AI systems publicly reported failures. Our solution combined GPT-5's exploratory capabilities with Gemini 3 Pro's proof strengths, leveraging agentic workflows with Python code execution and file-based memory, to derive both the correct answer (2112) and a rigorous mathematical proof. Through iterative refinement across multiple attempts, we discovered the necessity of agentic grounding and model orchestration, while human prompts evolved from problem-specific hints to generic, transferable meta-prompts. We analyze why capable AI fails autonomously, how each component addresses specific failure modes, and extract principles for effective vibe reasoning. Our findings suggest that lightweight human guidance can unlock frontier models' mathematical reasoning potential. This is ongoing work; we are developing automated frameworks and conducting broader evaluations to further validate Vibe Reasoning's generality and effectiveness.
Problem

Research questions and friction points this paper is trying to address.

Unlocking AI's latent mathematical knowledge through human-AI collaboration
Addressing AI's inability to apply existing knowledge to complex problems
Solving combinatorial optimization where autonomous AI systems previously failed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-AI collaboration with generic meta-prompts
Agentic grounding and model orchestration workflows
Combining multiple AI models for exploration and proof
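The orchestration described above can be sketched as a simple closed loop: an exploration model proposes conjectures, findings are persisted to file-based memory, and a proof model is invoked once a conjecture stabilizes. The sketch below is a minimal illustration under stated assumptions: `explore` and `prove` are hypothetical stubs standing in for GPT-5 and Gemini 3 Pro API calls, and the loop structure is inferred from the summary, not taken from the paper's actual framework.

```python
import json
import pathlib
import tempfile

# Hypothetical stand-ins for the two models; the paper pairs GPT-5
# (exploration) with Gemini 3 Pro (formal proof generation). Real API
# calls would replace these stubs.
def explore(problem, memory):
    # In practice: run exploratory reasoning, possibly executing Python
    # for symbolic/numeric checks, conditioned on accumulated memory.
    return {"conjecture": "answer = 2112", "evidence": "small-case search"}

def prove(problem, conjecture):
    # In practice: prompt the proof model to turn the conjecture into a
    # rigorous, human-verifiable argument.
    return f"Proof sketch that {conjecture} for: {problem}"

def vibe_reasoning_loop(problem, workdir, max_steps=3):
    memory_path = pathlib.Path(workdir) / "memory.json"
    memory = []
    for _ in range(max_steps):
        finding = explore(problem, memory)           # exploration phase
        memory.append(finding)
        memory_path.write_text(json.dumps(memory))   # persistent file-based memory
        if "conjecture" in finding:
            return prove(problem, finding["conjecture"])  # hand off to prover
    return None

result = vibe_reasoning_loop("IMO 2025 Problem 6", tempfile.mkdtemp())
```

In the paper's paradigm the human supplies meta-prompts that steer this loop; the sketch only shows the mechanical skeleton of exploration, memory, and proof hand-off.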
Jiaao Wu
Tsinghua University, Microsoft Research
Xian Zhang
Microsoft Research
Fan Yang
Microsoft Research
Yinpeng Dong
Tsinghua University
Machine Learning · Deep Learning · AI Safety