🤖 AI Summary
Large language models (LLMs) exhibit significant limitations in multi-step reasoning QA tasks, particularly in simultaneously mastering meta-level (e.g., planning, strategy) and object-level (e.g., mathematical, symbolic) reasoning.
Method: We introduce Franklin, a new benchmark dataset designed to disentangle these two reasoning tiers. Franklin is evaluated alongside three existing datasets on four state-of-the-art LLMs, with human annotation studies grounding the analysis.
Contribution/Results: Our evaluation shows that the LLMs demonstrate meta-level reasoning with high frequency, but struggle with object-level reasoning on several of the datasets—including the targeted tasks in Franklin, where they nonetheless handle the meta-level requirements well. Framing multi-step QA in terms of these two tiers enables a more fine-grained, decoupled assessment of LLM reasoning capabilities.
📝 Abstract
Large Language Models (LLMs) excel in natural language tasks but still face challenges in Question Answering (QA) tasks requiring complex, multi-step reasoning. We outline the types of reasoning required in some of these tasks, and reframe them in terms of meta-level reasoning (akin to high-level strategic reasoning or planning) and object-level reasoning (embodied in lower-level tasks such as mathematical reasoning). Franklin, a novel dataset requiring both meta- and object-level reasoning, is introduced and used along with three other datasets to evaluate four LLMs on question answering tasks requiring multiple steps of reasoning. Results from human annotation studies suggest that LLMs demonstrate meta-level reasoning with high frequency, but struggle with object-level reasoning tasks in some of the datasets used. Additionally, evidence suggests that LLMs find the object-level reasoning required for the questions in the Franklin dataset challenging, yet they exhibit strong performance with respect to its meta-level reasoning requirements.