OMEGA: Can LLMs Reason Outside the Box in Math? Evaluating Exploratory, Compositional, and Transformative Generalization

📅 2025-06-23
🤖 AI Summary
Existing evaluations inadequately assess the creative reasoning capabilities of large language models (LLMs) in mathematical problem solving, particularly regarding exploratory, compositional, and transformative generalization. Method: We propose a three-axis evaluation framework grounded in creativity theory and introduce OMEGA, the first benchmark explicitly designed for mathematical creativity. It features out-of-distribution test problems systematically generated via templated synthesis and rigorously validated using symbolic, numeric, and geometric methods across geometry, number theory, algebra, combinatorics, logic, and puzzles. Contribution/Results: Experiments reveal that state-of-the-art LLMs exhibit sharp performance degradation as problem complexity increases; supervised fine-tuning improves exploratory generalization but fails to address fundamental limitations in compositional and transformative reasoning. This work formalizes creativity constructs into quantifiable, empirically grounded evaluation dimensions, uncovering critical bottlenecks in cross-skill integration and strategic innovation, and establishes a paradigm for rigorously assessing and advancing mathematical creativity in AI systems.

📝 Abstract
Recent large language models (LLMs) with long Chain-of-Thought reasoning, such as DeepSeek-R1, have achieved impressive results on Olympiad-level mathematics benchmarks. However, they often rely on a narrow set of strategies and struggle with problems that require a novel way of thinking. To systematically investigate these limitations, we introduce OMEGA (Out-of-distribution Math Problems Evaluation with 3 Generalization Axes), a controlled yet diverse benchmark designed to evaluate three axes of out-of-distribution generalization, inspired by Boden's typology of creativity: (1) Exploratory: applying known problem-solving skills to more complex instances within the same problem domain; (2) Compositional: combining distinct reasoning skills, previously learned in isolation, to solve novel problems that require integrating these skills in new and coherent ways; and (3) Transformative: adopting novel, often unconventional strategies by moving beyond familiar approaches to solve problems more effectively. OMEGA consists of programmatically generated training-test pairs derived from templated problem generators across geometry, number theory, algebra, combinatorics, logic, and puzzles, with solutions verified using symbolic, numerical, or graphical methods. We evaluate frontier (or top-tier) LLMs and observe sharp performance degradation as problem complexity increases. Moreover, we fine-tune the Qwen-series models across all generalization settings and observe notable improvements in exploratory generalization, while compositional generalization remains limited and transformative reasoning shows little to no improvement. By isolating and quantifying these fine-grained failures, OMEGA lays the groundwork for advancing LLMs toward genuine mathematical creativity beyond mechanical proficiency.
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLMs' ability to reason beyond standard strategies in math
Assesses exploratory, compositional, and transformative generalization in math problems
Identifies limitations in LLMs' novel problem-solving and creative reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces OMEGA benchmark for out-of-distribution math problems
Evaluates exploratory, compositional, and transformative generalization
Generates training-test pairs programmatically from templated problem generators, with solutions verified symbolically, numerically, or graphically
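The generate-then-verify pipeline described above can be sketched in miniature. The snippet below is a hypothetical illustration, not OMEGA's actual code: it instantiates a simple algebra template (solve a*x + b = c) at a controllable complexity level, plants an integer solution, and checks candidate answers by exact substitution with `Fraction` as a stand-in for the paper's symbolic/numeric verifiers. All function names (`make_linear_problem`, `verify`, `make_split`) are assumptions for this sketch.

```python
import random
from fractions import Fraction

def make_linear_problem(rng, complexity):
    # One template instance: a*x + b = c, with a planted integer solution.
    # Coefficient magnitude grows with `complexity`, mirroring how OMEGA
    # scales instance difficulty within a fixed problem family.
    hi = 10 ** complexity
    a = rng.choice([n for n in range(-hi, hi + 1) if n != 0])
    b = rng.randint(-hi, hi)
    x = rng.randint(-hi, hi)  # planted solution
    return {"a": a, "b": b, "c": a * x + b,
            "prompt": f"Solve {a}*x + {b} = {a * x + b} for x."}

def verify(p, candidate):
    # Exact substitution check (no floating point), standing in for
    # symbolic/numeric verification of a model's answer.
    return Fraction(p["a"]) * candidate + p["b"] == p["c"]

def make_split(seed=0, train_complexity=1, test_complexity=3, n=5):
    # Train on easy instances, test on harder ones from the same template:
    # the "exploratory" generalization axis in this toy setting.
    rng = random.Random(seed)
    train = [make_linear_problem(rng, train_complexity) for _ in range(n)]
    test = [make_linear_problem(rng, test_complexity) for _ in range(n)]
    return train, test
```

Because every problem carries its own ground truth by construction, grading reduces to an exact check rather than string matching, which is what makes large-scale programmatic evaluation feasible.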