High Accuracy, Less Talk (HALT): Reliable LLMs through Capability-Aligned Finetuning

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address hallucination—i.e., the tendency of large language models (LLMs) to generate factually incorrect content when lacking the requisite knowledge or capability—this paper proposes a "capability alignment" post-training paradigm. The method decomposes model responses into atomic factual fragments, identifies incorrect fragments via ground-truth annotations, and either removes them or replaces them with the marker "Unsure from Here," governed by a tunable threshold that trades off response completeness against correctness. The key idea is to explicitly align what the model generates with the boundary of what it can reliably get right, implemented via capability-aware supervised fine-tuning (SFT). Finetuning four open-source models on biography writing, mathematics, coding, and medicine, HALT raises mean fragment correctness by 15 percentage points on average and improves the F1 score (combining completeness and correctness) by 4 points over baselines. Tuned for highest correctness, a single Llama3-70B model improves from 51% to 87% correctness across all four domains while retaining 53% of the completeness of standard finetuning.

📝 Abstract
Large Language Models (LLMs) currently respond to every prompt. However, they can produce incorrect answers when they lack knowledge or capability -- a problem known as hallucination. We instead propose post-training an LLM to generate content only when confident in its correctness and to otherwise (partially) abstain. Specifically, our method, HALT, produces capability-aligned post-training data that encodes what the model can and cannot reliably generate. We generate this data by splitting responses of the pretrained LLM into factual fragments (atomic statements or reasoning steps), and use ground truth information to identify incorrect fragments. We achieve capability-aligned finetuning responses by either removing incorrect fragments or replacing them with "Unsure from Here" -- according to a tunable threshold that allows practitioners to trade off response completeness and mean correctness of the response's fragments. We finetune four open-source models for biography writing, mathematics, coding, and medicine with HALT for three different trade-off thresholds. HALT effectively trades off response completeness for correctness, increasing the mean correctness of response fragments by 15% on average, while resulting in a 4% improvement in the F1 score (mean of completeness and correctness of the response) compared to the relevant baselines. By tuning HALT for highest correctness, we train a single reliable Llama3-70B model with correctness increased from 51% to 87% across all four domains while maintaining 53% of the response completeness achieved with standard finetuning.
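The data-generation step described in the abstract can be sketched as a simple truncation rule: walk the response's fragments in order and, at the first fragment judged unreliable, cut the response and append the abstention marker. This is a minimal illustration, not the paper's implementation; the function name, the per-fragment score interface, and the exact truncation policy (the paper also considers removing fragments rather than truncating) are assumptions.

```python
UNSURE_MARKER = "Unsure from Here"

def build_halt_target(fragments, scores, tau):
    """Build a capability-aligned finetuning target (hypothetical sketch).

    fragments: the response split into atomic factual fragments, in order.
    scores: an estimated correctness score per fragment (e.g. derived from
            ground-truth checks), same length as fragments.
    tau: tunable threshold; raising it yields shorter, more reliable
         targets (correctness up, completeness down).
    """
    kept = []
    for frag, score in zip(fragments, scores):
        if score < tau:
            # First unreliable fragment: abstain from here on.
            kept.append(UNSURE_MARKER)
            break
        kept.append(frag)
    return " ".join(kept)
```

A low threshold keeps the full response; a high one abstains early, which is the completeness/correctness dial the paper tunes across its three settings.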
Problem

Research questions and friction points this paper is trying to address.

Reduces LLM hallucinations by abstaining from uncertain responses
Aligns model outputs with actual knowledge via capability-aware finetuning
Balances response correctness and completeness through tunable thresholds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Capability-aligned finetuning for reliable LLMs
Generating post-training data with factual fragments
Tunable threshold for correctness-completeness trade-off
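The trade-off above is measured with fragment-level correctness (fraction of generated fragments that are correct) and completeness (fraction of the original fragments still present). The abstract describes the F1 score as a mean of the two; the sketch below uses the harmonic mean, the conventional F1 definition, and that choice, like the function signature, is an assumption.

```python
def halt_metrics(num_correct_kept, num_kept, num_original):
    """Hypothetical evaluation sketch for the correctness/completeness trade-off.

    num_correct_kept: kept fragments verified correct against ground truth.
    num_kept: fragments remaining after HALT-style truncation.
    num_original: fragments in the standard (untruncated) response.
    """
    correctness = num_correct_kept / num_kept if num_kept else 1.0
    completeness = num_kept / num_original if num_original else 0.0
    if correctness + completeness == 0:
        return correctness, completeness, 0.0
    # Harmonic mean: high only when both correctness and completeness are high.
    f1 = 2 * correctness * completeness / (correctness + completeness)
    return correctness, completeness, f1
```

Keeping half the fragments but getting nearly all of them right scores better under F1 than a complete answer riddled with errors, which is the behavior the tunable threshold is meant to reward.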