🤖 AI Summary
This study systematically evaluates the OpenAI o1-preview model’s capabilities on complex, cross-disciplinary reasoning tasks—spanning computer science, mathematics, medicine, and linguistics—to assess both progress toward artificial general intelligence (AGI) and the bottlenecks that remain.
Method: We adopt a multidisciplinary, multi-granularity evaluation framework integrating standardized benchmarks, domain-expert blind evaluation, human-in-the-loop assessment, and interpretability analysis across high-stakes scenarios: programming competitions, mathematical theorem proving, radiology report generation, chip EDA script synthesis, and financial modeling.
Contribution/Results: We empirically identify a “strong reasoning emergence” phenomenon: the model achieves an 83.3% success rate on competitive programming tasks and 100% accuracy on high-school-level mathematical reasoning, and it surpasses state-of-the-art models in radiological diagnosis and chip design. Its cross-domain performance approaches or matches that of human experts, providing pivotal empirical evidence and a novel evaluation paradigm for large language models advancing toward AGI.
📝 Abstract
This comprehensive study evaluates the performance of OpenAI's o1-preview large language model across a diverse array of complex reasoning tasks spanning multiple domains, including computer science, mathematics, natural sciences, medicine, linguistics, and social sciences. Through rigorous testing, o1-preview demonstrated remarkable capabilities, often achieving human-level or superior performance in areas ranging from coding challenges to scientific reasoning, and from language processing to creative problem-solving.

Key findings include:

- An 83.3% success rate in solving complex competitive programming problems, surpassing many human experts.
- Superior ability in generating coherent and accurate radiology reports, outperforming the other evaluated models.
- 100% accuracy on high-school-level mathematical reasoning tasks, with detailed step-by-step solutions.
- Advanced natural language inference capabilities across both general and specialized domains such as medicine.
- Impressive performance in chip design tasks, outperforming specialized models in areas such as EDA script generation and bug analysis.
- Remarkable proficiency in anthropology and geology, demonstrating deep understanding and reasoning in these specialized fields.
- Strong capabilities in quantitative investing, backed by comprehensive financial knowledge and statistical modeling skills.
- Effective performance in social media analysis, including sentiment analysis and emotion recognition.

The model excelled particularly in tasks requiring intricate reasoning and knowledge integration across fields. While some limitations were observed, including occasional errors on simpler problems and challenges with certain highly specialized concepts, the overall results indicate significant progress toward artificial general intelligence.