🤖 AI Summary
Existing evaluation methods inadequately characterize the hallucination behavior of small language models (SLLMs, <10B parameters) under fact-conflicting scenarios, and in particular overlook their sensitivity to context. This work proposes OnionEval, a multi-layer structured evaluation framework that systematically disentangles the layered nature of SLLM hallucinations, revealing that these models are stronger at factual judgment than at contextual reasoning. Building on this insight, the authors introduce the context-influence (CI) score, a metric that quantifies hallucination propensity across distinct levels of contextual abstraction, and establish an evaluation protocol combining controlled hallucination injection, contextual perturbation, and contrastive assessment. Experiments show that the CI score enables comparable evaluation across models and tasks, and that lightweight Chain-of-Thought prompting reduces hallucination rates by 37.2% on average, substantially improving the practical reliability of SLLMs.
📝 Abstract
Large Language Models (LLMs) are highly capable but require significant computational resources for both training and inference. Within the LLM family, smaller models (those with fewer than 10 billion parameters) also perform well across various tasks. However, these smaller models share limitations similar to those of their larger counterparts, including the tendency to hallucinate. Although many benchmarks exist for evaluating hallucination in LLMs, few focus specifically on small LLMs (SLLMs), and SLLMs show widely varying performance across benchmarks. In this paper, we introduce OnionEval, a multi-layer structured framework with a dedicated metric, the context-influence (CI) score, designed to assess the fact-conflicting hallucination tendencies of small LLMs across different contextual levels. Our experimental results reveal a key characteristic of SLLMs: they excel at factual analysis but struggle with contextual reasoning. Further investigation shows that a simple Chain-of-Thought strategy significantly mitigates these limitations, improving the practical usefulness of SLLMs in real-world applications.