The Unintended Trade-off of AI Alignment: Balancing Hallucination Mitigation and Safety in LLMs

📅 2025-10-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a previously unrecognized negative trade-off between factual-accuracy improvement (e.g., hallucination suppression) and safety alignment (e.g., refusal of harmful requests) in large language models, rooted in their shared underlying representations. To decouple the two, the authors propose a two-stage intervention: first, a sparse autoencoder disentangles “factualness” and “refusal behavior” features; second, subspace orthogonality constraints are imposed during fine-tuning so that knowledge updates do not overwrite safety-critical directions. The method enhances truthfulness and safety simultaneously without compromising model utility. Evaluated on the AdvBench and StrongReject benchmarks, it achieves significant hallucination reduction while maintaining high refusal rates, marking the first demonstration of concurrent improvement in both dimensions.
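The first stage above can be illustrated with a toy sparse autoencoder. This is a minimal sketch, not the paper's actual architecture: it assumes a tied-weight SAE with a ReLU encoder and an L1 sparsity penalty (a common setup for disentangling LLM features), and all dimensions and names are illustrative.

```python
# Toy sparse autoencoder over residual-stream activations.
# Assumption: tied decoder weights and an L1 penalty; the paper's real
# configuration (sizes, training data, untied weights, etc.) may differ.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_feat = 64, 256          # hidden size, overcomplete feature count

W_enc = rng.normal(0, 0.02, (d_model, d_feat))
b_enc = np.zeros(d_feat)
W_dec = W_enc.T.copy()             # tied decoder for simplicity

def sae_forward(h):
    """Encode activations h into sparse features, then reconstruct h."""
    f = np.maximum(h @ W_enc + b_enc, 0.0)   # ReLU -> sparse feature activations
    h_hat = f @ W_dec                        # reconstruction
    return f, h_hat

def sae_loss(h, l1_coeff=1e-3):
    """Reconstruction error plus an L1 sparsity penalty on the features."""
    f, h_hat = sae_forward(h)
    recon = np.mean((h - h_hat) ** 2)
    sparsity = l1_coeff * np.abs(f).mean()
    return recon + sparsity

h = rng.normal(size=(8, d_model))            # a batch of toy activations
f, _ = sae_forward(h)
```

After training such an SAE, individual feature directions can be inspected and labeled (e.g., as hallucination-related or refusal-related), which is the disentanglement step the summary describes.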

📝 Abstract
Hallucination in large language models (LLMs) has been widely studied in recent years, with progress in both detection and mitigation aimed at improving truthfulness. Yet a critical side effect remains largely overlooked: enhancing truthfulness can negatively impact safety alignment. In this paper, we investigate this trade-off and show that increasing factual accuracy often comes at the cost of weakened refusal behavior. Our analysis reveals that this arises from overlapping components in the model that simultaneously encode hallucination and refusal information, leading alignment methods to suppress factual knowledge unintentionally. We further examine how fine-tuning on benign datasets, even when curated for safety, can degrade alignment for the same reason. To address this, we propose a method that disentangles refusal-related features from hallucination features using sparse autoencoders, and preserves refusal behavior during fine-tuning through subspace orthogonalization. This approach prevents hallucinations from increasing while maintaining safety alignment. We evaluate our method on commonsense reasoning tasks and harmful benchmarks (AdvBench and StrongReject). Results demonstrate that our approach preserves refusal behavior and task utility, mitigating the trade-off between truthfulness and safety.
Problem

Research questions and friction points this paper is trying to address.

Investigating the trade-off between hallucination mitigation and safety alignment in LLMs
Addressing how increased factual accuracy weakens refusal behavior in models
Proposing feature disentanglement to maintain safety while reducing hallucinations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangles refusal and hallucination features using sparse autoencoders
Preserves refusal behavior via subspace orthogonalization during fine-tuning
Mitigates trade-off between truthfulness and safety in LLMs
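The second contribution, preserving refusal behavior via subspace orthogonalization, can be sketched as projecting fine-tuning updates onto the orthogonal complement of refusal-related directions. This is an illustrative approximation under stated assumptions, not the paper's exact constraint: the refusal directions below are random stand-ins, and a real pipeline would take them from the labeled SAE features.

```python
# Sketch of subspace orthogonalization for fine-tuning updates.
# Assumption: refusal behavior is captured by a small set of directions, and
# safe updates are obtained by removing their component from each gradient.
import numpy as np

rng = np.random.default_rng(1)
d = 64
refusal_dirs = rng.normal(size=(3, d))     # hypothetical refusal subspace (rows)

# Orthonormal basis Q for the refusal subspace via QR decomposition.
Q, _ = np.linalg.qr(refusal_dirs.T)        # Q has shape (d, 3)

def orthogonalize_update(grad):
    """Remove the component of a gradient that lies in the refusal subspace."""
    return grad - Q @ (Q.T @ grad)

g = rng.normal(size=d)                     # a raw fine-tuning gradient
g_safe = orthogonalize_update(g)           # component in refusal subspace removed
```

Because `g_safe` is orthogonal to every refusal direction, applying it as a weight update cannot move the model along those directions, which is the mechanism by which the method keeps knowledge updates from overwriting safety-critical behavior.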
🔎 Similar Papers
2024-06-17 · International Conference on Computational Linguistics · Citations: 6