🤖 AI Summary
This work addresses the challenge of detecting undesirable behaviors in large language models (LLMs), such as factual inaccuracies, safety violations, toxicity, and backdoor attacks, without fine-tuning, supervision, or task-specific adaptation. Inspired by microsaccades in biological vision, we propose a lightweight, unsupervised self-diagnostic method that introduces fine-grained positional-encoding perturbations to activate latent “self-failure alert” signals inherently embedded in pretrained LLMs. Our approach integrates positional perturbation, cross-modal analogy modeling, and unsupervised response analysis. Evaluated across multiple mainstream LLMs, it significantly improves detection rates for diverse undesirable responses while incurring negligible computational overhead. The key contribution lies in identifying and leveraging the model’s intrinsic robustness boundary as an “internal diagnostic interface,” establishing a novel paradigm for trustworthy LLM evaluation.
📝 Abstract
We draw inspiration from microsaccades, tiny involuntary eye movements that reveal hidden dynamics of human perception, to propose an analogous probing method for large language models (LLMs). Just as microsaccades expose subtle but informative shifts in vision, we show that lightweight position encoding perturbations elicit latent signals that indicate model misbehaviour. Our method requires no fine-tuning or task-specific supervision, yet detects failures across diverse settings including factuality, safety, toxicity, and backdoor attacks. Experiments on multiple state-of-the-art LLMs demonstrate that these perturbation-based probes surface misbehaviours while remaining computationally efficient. These findings suggest that pretrained LLMs already encode the internal evidence needed to flag their own failures, and that microsaccade-inspired interventions provide a pathway for detecting and mitigating undesirable behaviours.
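To make the probing idea concrete, the sketch below is a minimal illustration under stated assumptions, not the paper's actual procedure: it assumes a Hugging Face causal LM that accepts explicit `position_ids`, jitters the position indices by small random offsets (the “microsaccade”), and scores a prompt by how much the next-token distribution shifts. The model name, jitter range, and KL-based score are placeholders chosen for illustration.

```python
# Minimal sketch of a position-perturbation probe.
# Assumptions: a causal LM whose forward() accepts position_ids (e.g. GPT-2);
# the jitter scheme and KL-based score are illustrative, not the paper's method.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; any causal LM accepting position_ids works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def probe_score(text: str, max_jitter: int = 2, n_probes: int = 8) -> float:
    """Mean KL divergence between the unperturbed next-token distribution
    and the distributions obtained under small position-index jitters."""
    ids = tok(text, return_tensors="pt").input_ids
    seq_len = ids.shape[1]
    base_pos = torch.arange(seq_len).unsqueeze(0)
    with torch.no_grad():
        base_logits = model(ids, position_ids=base_pos).logits[:, -1, :]
        base_logp = F.log_softmax(base_logits, dim=-1)
        kls = []
        for _ in range(n_probes):
            # Jitter each position index by a small random offset (the "microsaccade").
            jitter = torch.randint(-max_jitter, max_jitter + 1, (1, seq_len))
            pert_pos = (base_pos + jitter).clamp(min=0)
            pert_logits = model(ids, position_ids=pert_pos).logits[:, -1, :]
            pert_logp = F.log_softmax(pert_logits, dim=-1)
            # KL(base || perturbed): how far the perturbed prediction drifts.
            kls.append(F.kl_div(pert_logp, base_logp, log_target=True,
                                reduction="batchmean"))
    return torch.stack(kls).mean().item()

print(probe_score("The capital of Australia is Sydney."))
```

In this reading, responses whose predictions are unusually sensitive to such tiny perturbations would be flagged for inspection; the actual detection rule, thresholds, and perturbation design are specified in the paper's method section.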