🤖 AI Summary
This work addresses hallucination in large language model (LLM) generation, a problem worsened by existing in-context learning (ICL) example selection strategies that rely primarily on surface-level similarity and exhibit limited generalization and robustness. The authors propose MB-ICL, a novel framework that, for the first time, integrates manifold learning with a class-aware prototypical mechanism into ICL example selection. Leveraging the frozen LLM's latent representations, MB-ICL constructs local geometric structures and category prototypes in the manifold space, enabling the efficient selection of more discriminative in-context examples without any model training or parameter modification. Experiments show that MB-ICL significantly outperforms existing methods on the FEVER and HaluEval benchmarks, with particularly strong performance on dialogue and summarization tasks, while remaining stable under temperature perturbations and across different model variants.
📝 Abstract
Large language models (LLMs) frequently generate factually incorrect or unsupported content, commonly referred to as hallucinations. Prior work has explored decoding strategies, retrieval augmentation, and supervised fine-tuning for hallucination detection, while recent studies show that in-context learning (ICL) can substantially influence factual reliability. However, existing ICL demonstration selection methods often rely on surface-level similarity heuristics and exhibit limited robustness across tasks and models. We propose MB-ICL, a manifold-based framework for selecting in-context demonstrations that leverages latent representations extracted from frozen LLMs. By jointly modeling local manifold structure and class-aware prototype geometry, MB-ICL selects demonstrations based on their proximity to learned prototypes rather than lexical or embedding similarity alone. Across factual verification (FEVER) and hallucination detection (HaluEval) benchmarks, MB-ICL outperforms standard ICL selection baselines in the majority of evaluated settings, with particularly strong gains on dialogue and summarization tasks. The method remains robust under temperature perturbations and model variation, indicating improved stability compared to heuristic retrieval strategies. While lexical retrieval can remain competitive in certain question-answering regimes, our results demonstrate that manifold-based prototype selection provides a reliable, training-free approach for hallucination detection without modifying LLM parameters, offering a principled direction for improved ICL demonstration selection.
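To make the prototype-based selection idea concrete, here is a minimal sketch of one plausible reading of it: candidate demonstrations are embedded with a frozen model, a per-class prototype is formed as the mean embedding of each label, and demonstrations are ranked by cosine proximity to their class prototype. This is only an illustrative simplification under assumed inputs (precomputed embedding vectors and labels); the actual MB-ICL construction additionally models local manifold geometry, which is omitted here.

```python
import numpy as np

def class_prototypes(embeddings: np.ndarray, labels: list) -> dict:
    """Mean embedding per class: a simple stand-in for the paper's
    class-aware prototypes (the real construction is richer)."""
    protos = {}
    for c in set(labels):
        idx = [i for i, l in enumerate(labels) if l == c]
        protos[c] = embeddings[idx].mean(axis=0)
    return protos

def select_demonstrations(embeddings: np.ndarray, labels: list, k: int = 2) -> list:
    """Keep the top-k candidates per class, ranked by cosine similarity
    to that class's prototype (a proxy for 'discriminative' examples)."""
    protos = class_prototypes(embeddings, labels)

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    selected = []
    for c, p in protos.items():
        idx = [i for i, l in enumerate(labels) if l == c]
        idx.sort(key=lambda i: cos(embeddings[i], p), reverse=True)
        selected.extend(idx[:k])
    return sorted(selected)
```

In practice the embeddings would come from hidden states of the frozen LLM, and the selected indices would be formatted into the ICL prompt; both steps are model-specific and omitted from this sketch.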