CPR: Mitigating Large Language Model Hallucinations with Curative Prompt Refinement

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently generate factual hallucinations when given ambiguous or incomplete user prompts, undermining output reliability. This work is among the first to systematically identify low-quality user-side prompting as a primary cause of hallucination. To address it, the authors propose a plug-and-play prompt optimization framework that lightly fine-tunes a small language model to clean prompts and align them with user intent, while automatically generating structured, information-complete supplementary task descriptions, entirely without external knowledge bases. The method substantially improves the semantic clarity and task solvability of input prompts. Extensive evaluation across multiple LLMs demonstrates a win rate of over 90% against baselines, substantially mitigating hallucination and improving output accuracy. The framework generalizes well across diverse tasks and models and deploys seamlessly with minimal computational overhead.

📝 Abstract
Recent advancements in large language models (LLMs) highlight their fluency in generating responses to diverse prompts. However, these models sometimes generate plausible yet incorrect "hallucinated" facts, undermining trust. A frequent but often overlooked cause of such errors is the use of poorly structured or vague prompts by users, leading LLMs to base responses on assumed rather than actual intentions. To mitigate hallucinations induced by these ill-formed prompts, we introduce Curative Prompt Refinement (CPR), a plug-and-play framework for curative prompt refinement that 1) cleans ill-formed prompts, and 2) generates additional informative task descriptions to align the intention of the user and the prompt using a fine-tuned small language model. When applied to language models, we discover that CPR significantly increases the quality of generation while also mitigating hallucination. Empirical studies show that prompts with CPR applied achieve over a 90% win rate over the original prompts without any external knowledge.
Problem

Research questions and friction points this paper is trying to address.

Mitigating hallucinations in large language models from vague prompts
Refining ill-formed prompts to align user intentions with model responses
Improving generation quality by cleaning prompts and adding task descriptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Curative Prompt Refinement framework mitigates hallucinations
Cleans prompts and generates informative task descriptions
Uses fine-tuned small language model for alignment
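The two-stage pipeline listed above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names are hypothetical, and a trivial rule-based stub stands in for the fine-tuned small language model so the example runs standalone.

```python
def clean_prompt(prompt: str) -> str:
    """Stage 1 (toy stand-in): strip filler words and normalize whitespace.

    In CPR this cleaning is performed by a fine-tuned small language model;
    the rule-based filter here only illustrates the interface.
    """
    filler = {"um", "uh", "like"}
    tokens = [t for t in prompt.split() if t.lower().strip(",.") not in filler]
    return " ".join(tokens)


def generate_task_description(prompt: str) -> str:
    """Stage 2 (toy stand-in): produce a supplementary task description.

    The paper generates a structured, information-complete description with
    the fine-tuned model; this stub returns a fixed template instead.
    """
    return (
        "Task: answer the question precisely. "
        f"If information is missing from '{prompt}', state the assumption used."
    )


def curative_prompt_refinement(prompt: str) -> str:
    """Combine the cleaned prompt with its generated task description."""
    cleaned = clean_prompt(prompt)
    return f"{cleaned}\n\n{generate_task_description(cleaned)}"


if __name__ == "__main__":
    refined = curative_prompt_refinement(
        "um, what's like the capital of, uh, that nordic country?"
    )
    print(refined)
```

The refined prompt (cleaned text plus task description) is then passed to the target LLM in place of the user's original prompt; the framework is plug-and-play in the sense that only this preprocessing step changes.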
Jung-Woo Shim
Department of Artificial Intelligence, Korea University, Anam-dong, Seongbuk-ku, Seoul 02841, Korea
Yeong-Joon Ju
Korea University
Computer Vision · Natural Language Processing · XAI
Ji-Hoon Park
Department of Artificial Intelligence, Korea University, Anam-dong, Seongbuk-ku, Seoul 02841, Korea
Seong-Whan Lee
Department of Artificial Intelligence, Korea University, Anam-dong, Seongbuk-ku, Seoul 02841, Korea