Fine-grained Hallucination Detection and Mitigation in Language Model Mathematical Reasoning

📅 2024-10-08
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from poorly characterized, hard-to-localize, and difficult-to-mitigate hallucinations in multi-step mathematical reasoning. Method: This paper proposes a fine-grained hallucination modeling paradigm comprising: (i) a novel six-dimensional taxonomy capturing typical mathematical reasoning error patterns; (ii) a modular Fine-Grained Process Reward Model (FG-PRM) that identifies hallucination types and dynamically mitigates them at each reasoning step; and (iii) an LLM self-injection method for synthesizing high-quality, fine-grained hallucination-labeled data. Results: On GSM8K and MATH benchmarks, our approach significantly improves solution accuracy. Its fine-grained hallucination detection achieves higher F1 scores than ChatGPT-3.5 and Claude-3. Moreover, ensembling multiple expert PRMs enables precise ranking and selection of optimal solutions, empirically validating the effectiveness of process-level hallucination mitigation.
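The ensembling of per-type expert PRMs described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `score_step` stands in for a trained type-specific Process Reward Model (here replaced by a toy keyword heuristic so the ensemble and ranking logic are runnable), and the min/product aggregation is one common PRM convention, assumed for illustration.

```python
from math import prod

# The six hallucination types from the paper's taxonomy.
HALLUCINATION_TYPES = [
    "fabrication",
    "factual_inconsistency",
    "context_inconsistency",
    "instruction_inconsistency",
    "logical_inconsistency",
    "logical_error",
]

def score_step(step: str, hallucination_type: str) -> float:
    """Placeholder for one specialized PRM: returns the probability that
    `step` is FREE of the given hallucination type. A real expert PRM is a
    fine-tuned classifier; this keyword check only makes the sketch run."""
    return 0.1 if hallucination_type.split("_")[0] in step.lower() else 0.9

def ensemble_step_score(step: str) -> float:
    # A step passes only if every expert PRM accepts it, so take the
    # minimum over the six type-specific scores.
    return min(score_step(step, t) for t in HALLUCINATION_TYPES)

def rank_solutions(solutions: list[list[str]]) -> list[list[str]]:
    # Score each candidate solution as the product of its per-step
    # ensemble scores, then rank candidates best-first (verification).
    return sorted(
        solutions,
        key=lambda steps: prod(ensemble_step_score(s) for s in steps),
        reverse=True,
    )
```

Selecting the top-ranked candidate from several sampled solutions is the "verification" use of the model: step-level scores localize where a solution goes wrong, while their aggregate decides which solution to keep.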

📝 Abstract
Hallucinations in large language models (LLMs) pose significant challenges in tasks requiring complex multi-step reasoning, such as mathematical problem-solving. Existing approaches primarily detect the presence of hallucinations but lack a nuanced understanding of their types and manifestations. In this paper, we first introduce a comprehensive taxonomy that categorizes the common hallucinations in mathematical reasoning tasks into six types: fabrication, factual inconsistency, context inconsistency, instruction inconsistency, logical inconsistency, and logical error. We then propose FG-PRM (Fine-Grained Process Reward Model), an augmented model designed to detect and mitigate hallucinations in a fine-grained, step-level manner. To address the limitations of manually labeling training data, we propose an automated method for generating fine-grained hallucination data using LLMs. By injecting hallucinations into reasoning steps of correct solutions, we create a diverse and balanced synthetic dataset for training FG-PRM, which consists of six specialized Process Reward Models (PRMs), each tailored to detect a specific hallucination type. Our FG-PRM demonstrates superior performance across two key tasks: 1) Fine-grained hallucination detection: classifying hallucination types for each reasoning step; and 2) Verification: ranking multiple LLM-generated outputs to select the most accurate solution, mitigating reasoning hallucinations. Our experiments show that FG-PRM outperforms ChatGPT-3.5 and Claude-3 on fine-grained hallucination detection and substantially boosts the performance of LLMs on GSM8K and MATH benchmarks.
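The abstract's data-synthesis step, injecting a known hallucination type into one step of a correct solution and recording step-level labels, can be sketched like this. The `perturb` callable stands in for the paper's LLM injection prompt (a hypothetical hook, not the paper's actual prompt); the toy perturbation used below simply corrupts an arithmetic result to mimic a logical error.

```python
from typing import Callable

def inject_hallucination(
    steps: list[str],
    step_idx: int,
    h_type: str,
    perturb: Callable[[str, str], str],
) -> tuple[list[str], list[str]]:
    """Replace one step of a correct chain-of-thought with a perturbed
    version exhibiting `h_type`, and emit per-step labels for training.
    `perturb(step, h_type)` abstracts the LLM self-injection call."""
    corrupted = list(steps)
    corrupted[step_idx] = perturb(steps[step_idx], h_type)
    labels = ["correct"] * len(steps)
    labels[step_idx] = h_type
    return corrupted, labels

def toy_perturb(step: str, h_type: str) -> str:
    # Illustrative stand-in for an LLM: corrupt the arithmetic result.
    return step.replace("5", "6")

steps = ["2 + 3 = 5.", "So the answer is 5."]
corrupted, labels = inject_hallucination(steps, 0, "logical_error", toy_perturb)
# corrupted[0] is now "2 + 3 = 6.", labels == ["logical_error", "correct"]
```

Running this over many correct solutions, cycling through the six types and injection positions, yields the balanced step-labeled dataset the abstract describes, with each type's examples used to train its specialized PRM.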
Problem

Research questions and friction points this paper is trying to address.

Detecting fine-grained hallucination types in mathematical reasoning
Mitigating step-level hallucinations in language model outputs
Automating generation of labeled data for hallucination detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained step-level hallucination detection model
Automated LLM-generated hallucination data generation
Process reward model for solution ranking