🤖 AI Summary
The mechanisms by which question formulation affects the reasoning capabilities of large language models (LLMs) remain poorly understood.
Method: We systematically evaluate five state-of-the-art LLMs using a multi-format benchmark—encompassing multiple-choice, true/false, and short-answer questions—that targets quantitative and deductive reasoning. We vary question format, number of answer options, and linguistic phrasing to isolate their effects.
Contribution/Results: (1) Question format significantly impacts accuracy: multiple-choice questions yield 12.3% higher average accuracy than short-answer ones, yet the correctness of intermediate reasoning steps correlates only weakly with final-answer accuracy (r = 0.31); (2) increasing the number of options or substituting neutral phrasing reduces model confidence and output consistency; (3) we empirically identify the phenomenon "correct reasoning ≠ correct answer," revealing systematic failures in the answer-mapping stage of current LLMs. These findings provide empirical grounding and methodological guidance for LLM reasoning evaluation and prompt-engineering optimization.
📝 Abstract
Large Language Models (LLMs) have been evaluated using diverse question types, e.g., multiple-choice, true/false, and short/long answers. This study addresses a previously unexplored question: how question type affects LLM accuracy on reasoning tasks. We investigate the performance of five LLMs on three different types of questions using quantitative and deductive reasoning tasks. The performance metrics include accuracy in the reasoning steps and in selecting the final answer. Key Findings: (1) Significant differences exist in LLM performance across question types. (2) Reasoning accuracy does not necessarily correlate with final-selection accuracy. (3) The number of options and the choice of words influence LLM performance.