🤖 AI Summary
Existing AI benchmarks predominantly evaluate answer correctness, neglecting the methodological originality and practical efficacy of solutions, and thus fail to assess agents' genuine innovation potential. Method: We propose InnoGym, the first systematic benchmark framework for evaluating AI agents' innovation potential across 18 real-world engineering and scientific tasks. It introduces a dual-axis evaluation scheme, "performance gain" and "method novelty", and ensures assessment reliability via resource-aware filtering, expert validation, and standardized solution curation. Evaluation is conducted within the unified iGym environment to enable reproducible, long-horizon testing. Contribution/Results: Experiments show that while current AI agents can generate methodologically novel solutions, their limited robustness prevents these solutions from consistently translating into stable performance improvements, highlighting the need to co-optimize innovation and practical efficacy in AI agent design.
📝 Abstract
LLMs and agents have achieved impressive progress in code generation, mathematical reasoning, and scientific discovery. However, existing benchmarks primarily measure correctness, overlooking the diversity of methods behind solutions. True innovation depends not only on producing correct answers but also on the originality of the approach. We present InnoGym, the first benchmark and framework designed to systematically evaluate the innovation potential of AI agents. InnoGym introduces two complementary metrics: performance gain, which measures improvement over the best-known solutions, and novelty, which captures methodological differences from prior approaches. The benchmark includes 18 carefully curated tasks from real-world engineering and scientific domains, each standardized through resource filtering, evaluator validation, and solution collection. In addition, we provide iGym, a unified execution environment for reproducible and long-horizon evaluations. Extensive experiments show that while some agents produce novel approaches, their lack of robustness limits their performance gains. These results highlight a key gap between creativity and effectiveness, underscoring the need for benchmarks that evaluate both.
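To make the two axes concrete, here is a minimal sketch of how such a dual-axis score could be computed. This is an illustration, not InnoGym's actual scoring: the relative-gain formula, the embedding-distance novelty proxy, and the function names `performance_gain` and `method_novelty` are all assumptions for exposition.

```python
# Minimal sketch of a dual-axis innovation score (assumed definitions, not
# the paper's actual formulas):
#   - performance gain: relative improvement over the best-known score
#   - novelty: distance of a method-description embedding from prior
#     approaches (a common proxy; not necessarily what InnoGym uses)
from typing import List
import numpy as np


def performance_gain(agent_score: float, best_known: float) -> float:
    """Relative improvement over the best-known solution (assumed definition)."""
    return (agent_score - best_known) / abs(best_known)


def method_novelty(agent_emb: np.ndarray, prior_embs: List[np.ndarray]) -> float:
    """1 minus the max cosine similarity to any prior method (assumed proxy)."""
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 1.0 - max(cos(agent_emb, p) for p in prior_embs)


# Toy usage: an agent slightly beats the best-known score with a method
# whose embedding differs substantially from the single prior approach.
gain = performance_gain(agent_score=0.83, best_known=0.80)  # 0.0375 (+3.75%)
nov = method_novelty(np.array([0.2, 0.9]), [np.array([1.0, 0.1])])
print(f"gain={gain:.3f}, novelty={nov:.3f}")
```

Reading the two numbers jointly is the point: a high-novelty, low-gain result would match the paper's finding that agents produce original methods that do not reliably convert into performance improvements.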