Humanity's Last Exam

📅 2025-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
State-of-the-art LLMs achieve near-saturation on mainstream benchmarks, so those benchmarks no longer reflect model capabilities at the frontier of human knowledge. Method: The paper introduces HLE, a multi-modal, closed-ended academic benchmark designed explicitly for the knowledge frontier, comprising 3,000 expert-crafted questions across dozens of subjects, including mathematics, the humanities, and the natural sciences. HLE is intended as the final closed-ended academic benchmark of its kind: each question has an unambiguous, easily verifiable solution, demands deep conceptual understanding, and cannot be quickly answered via internet retrieval. The benchmark is built through global collaboration with subject-matter experts and combines multi-modal question design, fully automated grading, and a review protocol that checks each question for a unique, defensible answer. Contribution/Results: Experiments show that state-of-the-art LLMs achieve low accuracy and poor calibration on HLE, exposing a clear gap between current model capabilities and the expert human frontier. The benchmark is publicly released to shift evaluation away from near-saturated, memorization-friendly tests toward genuine measurement of expert-level knowledge.
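As a rough illustration of what "fully automated grading" can look like for closed-ended questions, here is a minimal Python sketch. It assumes exact letter matching for multiple-choice items and normalized string comparison for short answers; the names (Question, normalize, grade) are hypothetical and not from the HLE codebase, and the paper's actual pipeline may instead use an LLM judge to decide answer equivalence.

```python
# Hypothetical sketch of automated grading for closed-ended questions.
# Question, normalize, and grade are illustrative names, not HLE's API.
from dataclasses import dataclass

@dataclass
class Question:
    prompt: str
    answer: str       # unique, verifiable reference answer
    answer_type: str  # "multipleChoice" or "exactMatch"

def normalize(text: str) -> str:
    """Lowercase and drop punctuation for a lenient string comparison."""
    return "".join(ch for ch in text.lower().strip() if ch.isalnum() or ch.isspace())

def grade(question: Question, model_answer: str) -> bool:
    """Return True if the model's answer matches the reference answer."""
    if question.answer_type == "multipleChoice":
        # Compare only the chosen option letter, e.g. "B) Mars" -> "B".
        return model_answer.strip().upper()[:1] == question.answer.strip().upper()[:1]
    return normalize(model_answer) == normalize(question.answer)

q = Question("Which planet is known as the Red Planet?", "B", "multipleChoice")
print(grade(q, "B) Mars"))  # True
```

Because every HLE question has a single verifiable answer, this kind of deterministic check (or an LLM-judged equivalence test) suffices; no human grading is needed at evaluation time.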

📝 Abstract
Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve over 90% accuracy on popular benchmarks like MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In response, we introduce Humanity's Last Exam (HLE), a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. HLE consists of 3,000 questions across dozens of subjects, including mathematics, humanities, and the natural sciences. HLE is developed globally by subject-matter experts and consists of multiple-choice and short-answer questions suitable for automated grading. Each question has a known solution that is unambiguous and easily verifiable, but cannot be quickly answered via internet retrieval. State-of-the-art LLMs demonstrate low accuracy and calibration on HLE, highlighting a significant gap between current LLM capabilities and the expert human frontier on closed-ended academic questions. To inform research and policymaking upon a clear understanding of model capabilities, we publicly release HLE at https://lastexam.ai.
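The abstract's claim of low "calibration" can be made concrete with a standard binned calibration error: ask the model to state a confidence alongside each answer, bin answers by confidence, and compare mean stated confidence to empirical accuracy in each bin. The sketch below is one common RMS-style formulation; it is illustrative, and details such as binning may differ from the paper's exact metric.

```python
import numpy as np

def rms_calibration_error(confidences, correct, n_bins=10):
    """Binned RMS calibration error: within each confidence bin, compare
    mean stated confidence to empirical accuracy, then take the
    bin-size-weighted root mean square of the gaps (0 = well calibrated)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each prediction to a bin; interior edges give indices 0..n_bins-1.
    idx = np.digitize(confidences, edges[1:-1])
    weighted_sq_gaps = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            gap = confidences[mask].mean() - correct[mask].mean()
            weighted_sq_gaps += (mask.sum() / len(confidences)) * gap**2
    return float(np.sqrt(weighted_sq_gaps))

# A model that is right ~20% of the time while claiming ~90% confidence
# is badly miscalibrated, which is the failure mode reported on HLE:
conf = [0.9, 0.95, 0.88, 0.92, 0.9, 0.91, 0.9, 0.93, 0.89, 0.9]
hits = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0]
print(rms_calibration_error(conf, hits))  # ~0.71, far from 0
```

A well-calibrated model would hedge on questions it cannot solve; the paper's finding of poor calibration means models instead answer frontier questions confidently and incorrectly.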
Problem

Research questions and friction points this paper addresses.

Large Language Models
Evaluation Metrics
Expert-Level Knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

High-Difficulty Benchmark
Interdisciplinary Questions
LLM Capability Assessment