GlitchMiner: Mining Glitch Tokens in Large Language Models via Gradient-based Discrete Optimization

📅 2024-10-19
📈 Citations: 1
Influential: 0
🤖 AI Summary
Large language models (LLMs) harbor "glitch tokens," input tokens that trigger anomalous model behavior and pose serious threats to reliability and safety. Existing detection methods rely on embedding heuristics or statistical outliers and suffer from poor generalizability and high false-negative rates. This paper proposes the first gradient-guided discrete local search framework for glitch token mining, adopting a behavior-driven paradigm that directly maximizes prediction entropy to expose model vulnerabilities. The method requires no architectural assumptions, embedding priors, or large-scale sampling. It performs efficient discrete optimization within token-level neighborhoods, ensuring cross-architecture generality, computational efficiency, and scalability. Experiments span ten LLMs from five major model families; the approach improves detection accuracy by up to 37% and reduces the average number of queries per discovery by 62%. The implementation is publicly available.

📝 Abstract
Glitch tokens, inputs that trigger unpredictable or anomalous behavior in Large Language Models (LLMs), pose significant challenges to model reliability and safety. Existing detection methods primarily rely on heuristic embedding patterns or statistical anomalies within internal representations, limiting their generalizability across different model architectures and potentially missing anomalies that deviate from observed patterns. We introduce GlitchMiner, a behavior-driven framework designed to identify glitch tokens by maximizing predictive entropy. Leveraging a gradient-guided local search strategy, GlitchMiner efficiently explores the discrete token space without relying on model-specific heuristics or large-batch sampling. Extensive experiments across ten LLMs from five major model families demonstrate that GlitchMiner consistently outperforms existing approaches in detection accuracy and query efficiency, providing a generalizable and scalable solution for effective glitch token discovery. Code is available at https://github.com/wooozihu/GlitchMiner.
Problem

Research questions and friction points this paper is trying to address.

Identifying glitch tokens that trigger unpredictable behavior in LLMs
Overcoming limitations of heuristic-based detection methods across architectures
Developing efficient gradient-based optimization for glitch token discovery
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses gradient-guided discrete optimization for token discovery
Maximizes predictive entropy to identify glitch tokens
Operates without model-specific heuristics or large-batch sampling
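The idea behind the bullets above can be sketched in a few lines: score a token by the entropy of the model's next-token distribution, use the gradient of that entropy with respect to the token's embedding to rank neighboring tokens via a first-order Taylor approximation, and step to the best candidate. This is a minimal illustrative sketch using a toy embedding table and linear head in place of a real LLM; the exact neighborhood construction and scoring details of GlitchMiner may differ from this assumption.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
V, d = 64, 16  # toy vocabulary size and embedding dimension

# Toy stand-in for an LLM: an embedding table plus a linear head that
# produces next-token logits from a single token's embedding.
emb = torch.randn(V, d)
head = torch.nn.Linear(d, V)

def entropy(e):
    """Predictive entropy of the next-token distribution given embedding e."""
    logp = F.log_softmax(head(e), dim=-1)
    return -(logp.exp() * logp).sum()

def glitch_search_step(tok):
    """One gradient-guided local-search step: take the entropy gradient at
    the current token's embedding, score every candidate token with a
    first-order Taylor approximation, and move to the best candidate."""
    e = emb[tok].clone().requires_grad_(True)
    H = entropy(e)
    (g,) = torch.autograd.grad(H, e)
    # Approximate entropy of candidate v as H + g . (emb[v] - e).
    scores = H.detach() + (emb - e.detach()) @ g
    return int(scores.argmax())

tok = 0
for _ in range(5):  # a few local-search iterations
    tok = glitch_search_step(tok)
print("candidate glitch token:", tok)
```

Tokens whose neighborhoods keep yielding high predictive entropy are flagged as glitch-token candidates; the gradient ranking avoids evaluating the full vocabulary at every step, which is where the query savings come from.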
Zihui Wu
PhD student, California Institute of Technology
Computational imaging
Haichang Gao
School of Computer Science and Technology, Xidian University
Ping Wang
School of Computer Science and Technology, Xidian University
Shudong Zhang
School of Computer Science and Technology, Xidian University
Zhaoxiang Liu
China Unicom
Computer Vision, Deep Learning, Robotics, Human-Computer Interaction
Shiguo Lian
CloudMinds