Evaluating LLMs' Mathematical Reasoning in Financial Document Question Answering

📅 2024-02-17
🏛️ Annual Meeting of the Association for Computational Linguistics
📈 Citations: 26
Influential: 2
🤖 AI Summary
This study investigates the capability of large language models (LLMs) to perform complex arithmetic reasoning over semi-structured financial documents containing both tabular and textual content. To address this challenge, we propose a novel prompting framework for semi-structured documents—evaluated on four benchmark datasets (TATQA, FinQA, ConvFinQA, and MultiHiertt)—that integrates chain-of-thought reasoning, table compression, and stepwise verification. We further introduce a fine-grained evaluation protocol to rigorously assess multi-step numerical reasoning and joint table–text comprehension. Experiments across mainstream LLMs (e.g., GPT-4, Llama-3) reveal a pronounced performance degradation as the number of arithmetic steps and the structural complexity of tables increase. Our method achieves a 5.2% absolute accuracy gain on FinQA and establishes state-of-the-art robustness on ConvFinQA. Critically, this work provides a quantitative characterization of LLMs' limits in multi-step arithmetic reasoning over hybrid table–text inputs.
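The three prompting components named above (table compression, chain-of-thought prompting, and stepwise verification) can be sketched roughly as follows. This is a minimal illustration only: the function names, prompt wording, and `step_i = <expression>` format are assumptions for the sketch, not the paper's actual implementation.

```python
# Illustrative sketch of the three prompting components described in the
# summary. All names and formats here are assumptions, not the paper's code.

def compress_table(table: list[list[str]], keywords: set[str]) -> list[list[str]]:
    """Keep the header row plus only the rows mentioning a question keyword."""
    header, *rows = table
    kept = [r for r in rows
            if any(k.lower() in " ".join(r).lower() for k in keywords)]
    return [header] + kept

def build_cot_prompt(question: str, table: list[list[str]], text: str) -> str:
    """Serialize the compressed table and request step-by-step reasoning."""
    table_str = "\n".join(" | ".join(row) for row in table)
    return (
        f"Context:\n{text}\n\nTable:\n{table_str}\n\n"
        f"Question: {question}\n"
        "Let's solve this step by step, writing each arithmetic step "
        "as `step_i = <expression>` before giving the final answer."
    )

def verify_steps(steps: list[tuple[str, float]]) -> bool:
    """Re-execute each extracted arithmetic expression and check the
    model's claimed value (eval is safe here only because the expressions
    are assumed to be pre-filtered to numeric arithmetic)."""
    return all(abs(eval(expr) - claimed) <= 1e-6 for expr, claimed in steps)

table = [["Year", "Revenue"], ["2021", "120"], ["2022", "150"], ["2019", "90"]]
compressed = compress_table(table, {"2021", "2022"})        # drops the 2019 row
prompt = build_cot_prompt("What was revenue growth from 2021 to 2022?",
                          compressed, "Annual report excerpt ...")
ok = verify_steps([("150 - 120", 30.0), ("30 / 120", 0.25)])  # both steps check out
```

In this sketch, compression filters rows by question keywords before prompting, and verification re-executes the arithmetic the model claims, which is one plausible way to catch the step-level errors the summary reports growing with reasoning depth.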

📝 Abstract
Large Language Models (LLMs) excel at natural language understanding, but their capability for complex mathematical reasoning over an amalgamation of structured tables and unstructured text is uncertain. This study explores LLMs' mathematical reasoning on four financial tabular question-answering datasets: TATQA, FinQA, ConvFinQA, and MultiHiertt. Through extensive experiments with various models and prompting techniques, we assess how LLMs adapt to complex tables and mathematical tasks. We focus on sensitivity to table complexity and performance variations with an increasing number of arithmetic reasoning steps. The results provide insights into LLMs' capabilities and limitations in handling complex mathematical scenarios for semi-structured tables. Ultimately, we introduce a novel prompting technique tailored to semi-structured documents, matching or outperforming other baselines in performance while providing a nuanced understanding of LLMs' abilities for such a task.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' mathematical reasoning on financial tabular QA datasets
Assessing sensitivity to table complexity and arithmetic reasoning steps
Developing novel prompting techniques for semi-structured financial documents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel prompting technique tailored to semi-structured financial documents
Systematic evaluation of LLMs across four financial tabular QA benchmarks
Fine-grained analysis of sensitivity to table complexity and number of arithmetic reasoning steps