🤖 AI Summary
Large language models (LLMs) frequently generate hallucinations or erroneous responses when confronted with queries beyond their capability boundaries, necessitating robust task-feasibility identification and proactive refusal mechanisms. Method: We systematically construct the first taxonomy of LLM task infeasibility, covering diverse hallucination scenarios, and introduce UNFEASIBLE-Bench, the first benchmark dataset dedicated to infeasible-task identification. We further propose a capability-boundary-aware refusal mechanism: a binary classification framework for task-feasibility assessment, supported by high-quality refusal annotations and supervised fine-tuning (SFT) to optimize refusal policies. Contribution/Results: Experiments demonstrate substantial improvements across mainstream LLMs: +28.7% in infeasible-task recognition accuracy and +34.1% in refusal reasonableness. This work provides both theoretical foundations and empirical evidence for the safe, controllable deployment of LLMs in safety-critical applications.
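To make the refusal-SFT setup concrete, here is a minimal sketch of how training examples pairing tasks with either answers or templated refusals might be assembled. The field names, label values, and refusal template are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: building SFT examples for refusal training.
# The schema (prompt/completion/label) and the refusal wording are
# assumptions for illustration; the paper's actual annotation format may differ.

REFUSAL_TEMPLATE = (
    "I can't complete this task: it requires {skill}, "
    "which exceeds my capabilities."
)

def make_sft_example(task: str, feasible: bool,
                     answer: str = "", skill: str = "") -> dict:
    """Pair a task with a normal answer (feasible) or a templated refusal (infeasible)."""
    target = answer if feasible else REFUSAL_TEMPLATE.format(skill=skill)
    return {
        "prompt": task,
        "completion": target,
        "label": "feasible" if feasible else "infeasible",
    }

examples = [
    make_sft_example("Summarize this paragraph: ...", True,
                     answer="<reference summary>"),
    make_sft_example("Tell me tomorrow's winning lottery numbers.", False,
                     skill="predicting future random events"),
]
```

The binary `label` field supports the feasibility-classification objective, while the `completion` field supplies the refusal text for supervised fine-tuning.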
📄 Abstract
Large language models (LLMs) have shown remarkable performance on a variety of tasks but often fail to handle queries that exceed their knowledge and capabilities, leading to incorrect or fabricated responses. This paper addresses the need for LLMs to recognize and refuse tasks that are infeasible because the required skills surpass their capabilities. We first conceptualize infeasible tasks for LLMs and provide categorizations that cover the spectrum of related hallucinations discussed in the existing literature. We develop and benchmark a new dataset comprising diverse infeasible and feasible tasks to evaluate multiple LLMs' abilities to reject infeasible tasks. Furthermore, we explore the potential of improving LLMs' refusal capabilities through fine-tuning. Experiments validate the effectiveness of our trained models, offering promising directions for refining the operational boundaries of LLMs in real-world applications.