🤖 AI Summary
This work addresses the hallucination problem in large language models and multimodal large language models by conceptualizing the model as a dynamical system in which factual knowledge corresponds to stable equilibrium points in the representation space. Leveraging Lyapunov stability theory, the authors propose a novel probing mechanism that combines derivative-constrained training, systematic input perturbations, and a two-stage training procedure to capture the model's confidence decay under perturbations. This approach identifies the boundaries of knowledge-transition regions, thereby enabling hallucination detection. Experiments across multiple datasets and models show that the proposed method significantly outperforms existing baselines, yielding more stable and reliable hallucination identification.
📝 Abstract
We address hallucination detection in Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) by framing the problem through the lens of dynamical systems stability theory. Rather than treating hallucination as a straightforward classification task, we conceptualize (M)LLMs as dynamical systems, where factual knowledge is represented by stable equilibrium points within the representation space. Our main insight is that hallucinations tend to arise at the boundaries of knowledge-transition regions separating stable and unstable zones. To capture this phenomenon, we propose Lyapunov Probes: lightweight networks trained with derivative-based stability constraints that enforce a monotonic decay in confidence under input perturbations. By performing systematic perturbation analysis and applying a two-stage training process, these probes reliably distinguish between stable factual regions and unstable, hallucination-prone regions. Experiments on diverse datasets and models demonstrate consistent improvements over existing baselines.
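The core mechanism, a probe whose confidence is constrained to decay monotonically as the hidden state is perturbed, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the logistic probe, the finite-difference directional derivative, and the hinge penalty on positive derivatives (names like `probe_confidence` and `decay_penalty`) are all illustrative assumptions about how such a derivative-based stability constraint could be realized.

```python
import numpy as np

rng = np.random.default_rng(0)

def probe_confidence(h, w, b):
    """Illustrative probe: scalar confidence in (0, 1) from hidden state h."""
    return 1.0 / (1.0 + np.exp(-(h @ w + b)))

def decay_penalty(h, w, b, eps=1e-2, n_dirs=8):
    """Hinge penalty on stability violations: confidence should not
    increase when h is perturbed (finite-difference approximation of
    the directional derivative along random unit directions)."""
    base = probe_confidence(h, w, b)
    penalty = 0.0
    for _ in range(n_dirs):
        d = rng.standard_normal(h.shape)
        d /= np.linalg.norm(d)  # unit perturbation direction
        deriv = (probe_confidence(h + eps * d, w, b) - base) / eps
        penalty += max(0.0, deriv)  # penalize only confidence increases
    return penalty / n_dirs

# Toy usage on a random hidden state.
dim = 16
h = rng.standard_normal(dim)
w = rng.standard_normal(dim)
penalty = decay_penalty(h, w, b=0.0)
```

In training, this penalty would be added to the probe's classification loss, so that states near stable (factual) equilibria keep high confidence while perturbations push confidence monotonically downward, which is the behavior the abstract attributes to the Lyapunov-style constraint.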