Is Large Language Model Performance on Reasoning Tasks Impacted by Different Ways Questions Are Asked?

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
The mechanisms by which question formulation affects the reasoning capabilities of large language models (LLMs) remain poorly understood.

Method: We systematically evaluate five state-of-the-art LLMs using a multi-format benchmark—encompassing multiple-choice, true/false, and short-answer questions—that targets quantitative and deductive reasoning. We vary question format, number of answer options, and linguistic phrasing to isolate their effects.

Contribution/Results: (1) Question format significantly impacts accuracy: multiple-choice questions yield 12.3% higher average accuracy than short-answer ones, yet the correctness of intermediate reasoning steps is only weakly correlated with final-answer accuracy (r = 0.31); (2) increasing the option count or substituting neutral phrasing reduces model confidence and output consistency; (3) we empirically identify the critical phenomenon "correct reasoning ≠ correct answer," revealing systematic failures in the answer-mapping stage of current LLMs. These findings provide empirical grounding and methodological guidance for LLM reasoning evaluation and prompt engineering.

📝 Abstract
Large Language Models (LLMs) have been evaluated using diverse question types, e.g., multiple-choice, true/false, and short/long answers. This study addresses an unexplored question: the impact of different question types on LLM accuracy in reasoning tasks. We investigate the performance of five LLMs on three types of questions using quantitative and deductive reasoning tasks. The performance metrics include accuracy in the reasoning steps and in choosing the final answer. Key Findings: (1) Significant differences exist in LLM performance across question types. (2) Reasoning accuracy does not necessarily correlate with final-selection accuracy. (3) The number of options and the choice of words influence LLM performance.
Problem

Research questions and friction points this paper is trying to address.

Impact of question types on LLM reasoning accuracy
Performance variation across multiple-choice, true/false, and short/long answers
Discrepancy between reasoning steps and final answer selection accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates LLMs on diverse question types
Analyzes reasoning vs final answer accuracy
Examines question wording impact on performance