LLM for Complex Reasoning Task: An Exploratory Study in Fermi Problems

📅 2025-04-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Fermi Problems (FPs), ill-structured real-world estimation tasks that demand human-like logical and numerical reasoning, serve as a critical benchmark for assessing the robustness of large language models' (LLMs) reasoning under ambiguity and structural uncertainty. Method: The paper systematically evaluates LLMs on FPs using a publicly available FP dataset and a prompting strategy grounded in the TELeR taxonomy; zero-shot experiments are conducted across GPT-4, Claude, and Llama. Contribution/Results: All three models achieve average fp_scores below 0.5, indicating substantial limitations in coarse-grained estimation. Crucially, "standard" FPs (e.g., "How many piano tuners are in Chicago?") yield significantly higher accuracy and inference efficiency than "specific" FPs (e.g., "How many tennis balls fit in a school bus?"), revealing that question structure strongly constrains LLMs' numerical reasoning. This work provides empirical evidence and a methodological foundation for characterizing the boundaries of LLMs in real-world approximate reasoning.

📝 Abstract
Fermi Problems (FPs) are mathematical reasoning tasks that require human-like logic and numerical reasoning. Unlike other reasoning questions, FPs often involve real-world impracticalities or ambiguous concepts, making them challenging even for humans to solve. Despite advancements in AI, particularly with large language models (LLMs) in various reasoning tasks, FPs remain relatively under-explored. This work conducted an exploratory study to examine the capabilities and limitations of LLMs in solving FPs. We first evaluated the overall performance of three advanced LLMs using a publicly available FP dataset, designing prompts according to the recently proposed TELeR taxonomy in a zero-shot setting. Results indicated that all three LLMs achieved an fp_score (ranging from 0 to 1) below 0.5, underscoring the inherent difficulty of these reasoning tasks. To investigate further, we categorized FPs into standard and specific questions, hypothesizing that LLMs would perform better on standard questions, which are characterized by clarity and conciseness, than on specific ones. Comparative experiments confirmed this hypothesis, demonstrating that LLMs performed better on standard FPs in terms of both accuracy and efficiency.
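For context, the sketch below shows one way an order-of-magnitude fp_score in the 0-1 range could be computed. The exact formula behind the benchmark's fp_score is not given on this page, so the three-orders-of-magnitude tolerance, the function name, and the example values are assumptions made purely for illustration.

```python
import math

def fp_score(prediction: float, gold: float, tolerance_oom: float = 3.0) -> float:
    """Illustrative Fermi-estimation score in [0, 1].

    Assumption for illustration only: the score decays linearly with the
    number of orders of magnitude separating the prediction from the gold
    answer, reaching 0 once the error exceeds `tolerance_oom` orders.
    """
    if prediction <= 0 or gold <= 0:
        return 0.0
    oom_error = abs(math.log10(prediction / gold))
    return max(0.0, 1.0 - oom_error / tolerance_oom)

# Example: the gold answer is 100 piano tuners.
print(fp_score(125, 100))     # ~0.97: within a tenth of an order of magnitude
print(fp_score(10_000, 100))  # ~0.33: off by two orders of magnitude
```

Under a scheme like this, an average score below 0.5 would mean the models' estimates were typically more than an order of magnitude away from the reference answers.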
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' performance on Fermi Problems
Comparing LLMs' accuracy in standard vs specific FPs
Exploring limitations of LLMs in complex reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated three advanced LLMs on a publicly available Fermi Problems dataset
Applied the TELeR taxonomy to prompt design (see the sketch after this list)
Compared performance on standard vs. specific questions
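To make the prompt-design point concrete, here is a minimal sketch of what increasingly detailed zero-shot prompts for a single Fermi problem might look like under a level-of-detail taxonomy such as TELeR. The level names, wording, and variable names are illustrative assumptions, not the paper's actual prompts.

```python
# Hypothetical illustration of prompts at increasing levels of detail,
# in the spirit of the TELeR taxonomy; not the prompts used in the paper.

QUESTION = "How many piano tuners are there in Chicago?"

PROMPTS = {
    # Bare task, no directives.
    "low_detail": QUESTION,
    # Task plus a single high-level directive.
    "medium_detail": f"{QUESTION}\nGive a single numeric estimate.",
    # Task decomposed into explicit sub-steps.
    "high_detail": (
        f"{QUESTION}\n"
        "1. List the quantities you need to estimate.\n"
        "2. Assign a rough value to each quantity.\n"
        "3. Combine the values and report one final number."
    ),
}

for level, prompt in PROMPTS.items():
    print(f"--- {level} ---\n{prompt}\n")
```

In a setting like the paper's, such prompts would be sent zero-shot to each model and the returned estimates scored against the reference answers.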
🔎 Similar Papers
No similar papers found.
Zishuo Liu
MCS Department, Gustavus Adolphus College
Carlos Rabat Villarreal
Department of CSSE, Auburn University
Mostafa Rahgouy
Department of CSSE, Auburn University
Amit Das
Department of CIS, University of North Alabama
Zheng Zhang
Department of CSIS, Murray State University
Chang Ren
Department of CSSE, Auburn University
Dongji Feng
California State University, Monterey Bay
Information Retrieval · NLU · Evaluation