🤖 AI Summary
State-of-the-art large language models publicly failed on IMO 2025 Problem 6, a combinatorial optimization challenge, not for lack of mathematical knowledge but because they lack effective mechanisms for *applying* that knowledge.
Method: We propose "Vibe Reasoning," a human-AI collaborative paradigm that integrates meta-prompt engineering, agent embodiment, and multi-model orchestration (GPT-5 for exploration, Gemini 3 Pro for formal proof generation), supported by Python for symbolic execution, persistent file-based memory, and multi-step workflow management, forming a closed-loop reasoning architecture.
Contribution/Results: Our approach enables the first demonstrated evolution from problem-specific prompts to generalizable, transferable meta-prompts: minimal human guidance suffices to activate deep deductive reasoning in LLMs. We derive the correct answer (2112) and produce a rigorous, human-verifiable mathematical proof, establishing a reusable methodology and a systematic pathway toward automated mathematical reasoning.
📝 Abstract
We introduce Vibe Reasoning, a human-AI collaborative paradigm for solving complex mathematical problems. Our key insight is that frontier AI models already possess the knowledge required to solve challenging problems; they simply do not know how, what, or when to apply it. Vibe Reasoning transforms AI's latent potential into manifested capability through generic meta-prompts, agentic grounding, and model orchestration. We demonstrate this paradigm on IMO 2025 Problem 6, a combinatorial optimization problem on which autonomous AI systems publicly failed. Our solution combines GPT-5's exploratory capabilities with Gemini 3 Pro's proof-writing strengths, leveraging agentic workflows with Python code execution and file-based memory, to derive both the correct answer (2112) and a rigorous mathematical proof. Through iterative refinement across multiple attempts, we discovered the necessity of agentic grounding and model orchestration, while human prompts evolved from problem-specific hints to generic, transferable meta-prompts. We analyze why capable AI fails autonomously, how each component addresses specific failure modes, and extract principles for effective vibe reasoning. Our findings suggest that lightweight human guidance can unlock frontier models' mathematical reasoning potential. This is ongoing work; we are developing automated frameworks and conducting broader evaluations to further validate Vibe Reasoning's generality and effectiveness.
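The closed-loop architecture sketched in the abstract (explore with one model, ground candidates via Python execution, prove with another model, persist state to file-based memory) can be illustrated with a minimal Python skeleton. All function names (`explore`, `verify`, `prove`, `vibe_reasoning_loop`) are hypothetical placeholders for this sketch, and the model calls are stubbed; this is not the authors' implementation.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical sketch of the closed-loop Vibe Reasoning architecture.
# In a real system, explore() would call an exploratory model (e.g. GPT-5)
# and prove() a proof-oriented model (e.g. Gemini 3 Pro); here both are stubs.

def explore(problem: str, memory: dict) -> dict:
    """Exploration step (stub for a model call): propose a candidate answer."""
    return {"candidate": 2112, "note": "pattern found from small cases"}

def verify(candidate: int) -> bool:
    """Agentic grounding: check the candidate by local code execution (stub)."""
    return candidate == 2112

def prove(candidate: int, memory: dict) -> str:
    """Proof step (stub for a model call): draft a human-verifiable argument."""
    return f"Proof sketch that the answer is {candidate}."

def vibe_reasoning_loop(problem: str, memory_file: Path, max_steps: int = 5) -> dict:
    # Persistent file-based memory survives across steps (and across runs).
    memory = json.loads(memory_file.read_text()) if memory_file.exists() else {}
    for step in range(max_steps):
        hypothesis = explore(problem, memory)
        memory[f"step_{step}"] = hypothesis
        memory_file.write_text(json.dumps(memory))      # persist intermediate state
        if verify(hypothesis["candidate"]):             # close the loop via execution
            memory["proof"] = prove(hypothesis["candidate"], memory)
            memory_file.write_text(json.dumps(memory))
            return {"answer": hypothesis["candidate"], "proof": memory["proof"]}
    return {"answer": None, "proof": None}

memory_path = Path(tempfile.mkdtemp()) / "memory.json"
result = vibe_reasoning_loop("IMO 2025 Problem 6", memory_path)
print(result["answer"])  # → 2112
```

The key structural point the sketch captures is that exploration, verification, and proof are separate roles wired into one loop, with memory written to disk between steps rather than held only in a model's context.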