🤖 AI Summary
This study systematically investigates the robustness of large language models (LLMs) against character-level structured noise, specifically invisible Unicode control characters that fragment tokenization and lower the signal-to-noise ratio. We propose the first adversarial perturbation framework based on invisible Unicode characters and design a multidimensional robustness evaluation framework, enabling cross-dimensional analysis across model architectures, task types, and noise intensities. Experimental results reveal that mainstream LLMs retain substantial performance under severe character-level perturbations, demonstrating unexpected robustness. We further validate their implicit denoising capability and introduce an explicit defense strategy grounded in Unicode normalization and robust tokenization. Our findings provide interpretable, deployable insights and empirical evidence for mitigating model misuse in high-stakes applications such as online proctoring and secure assessment systems.
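The explicit defense described above can be illustrated with a minimal sketch. The snippet below is an assumption about one plausible implementation, not the paper's actual code: it drops characters in the Unicode "Cf" (format) category, which covers zero-width and directional control characters, and then applies NFKC normalization with Python's standard `unicodedata` module.

```python
import unicodedata

def denoise(text: str) -> str:
    """Hypothetical explicit defense: strip invisible format characters,
    then apply Unicode compatibility normalization (NFKC)."""
    # Category "Cf" includes ZERO WIDTH SPACE (U+200B), ZWNJ (U+200C),
    # ZWJ (U+200D), and directional controls such as U+202E.
    stripped = "".join(ch for ch in text if unicodedata.category(ch) != "Cf")
    return unicodedata.normalize("NFKC", stripped)

noisy = "H\u200be\u200bl\u200bl\u200bo"
print(denoise(noisy))  # Hello
```

Note that NFKC alone does not remove zero-width characters, which is why the category filter runs first; normalization then canonicalizes any remaining compatibility variants before tokenization.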
📝 Abstract
This work investigates the resilience of contemporary LLMs against frequent and structured character-level perturbations, specifically the insertion of a noisy character after each input character. We introduce \nameshort{}, a practical method that inserts invisible Unicode control characters into text to discourage LLM misuse in scenarios such as online exam systems. Surprisingly, despite strong obfuscation that fragments tokenization and significantly reduces the signal-to-noise ratio, many LLMs still maintain notable performance. Through a comprehensive evaluation across model-, problem-, and noise-related configurations, we examine the extent and mechanisms of this robustness, covering both the handling of character-level tokenization and the *implicit* versus *explicit* denoising hypotheses for character-level noise. We hope our findings on the low-level robustness of LLMs will shed light on the risks of their misuse and on the reliability of deploying LLMs across diverse applications.
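The perturbation itself, inserting an invisible character after every input character, can be sketched as follows. This is a minimal illustration under the assumption that a single zero-width code point (here U+200B, ZERO WIDTH SPACE) serves as the noise character; the paper's method may use other invisible Unicode characters or mixtures.

```python
ZWSP = "\u200b"  # ZERO WIDTH SPACE: renders as nothing in most viewers

def perturb(text: str, noise: str = ZWSP) -> str:
    """Interleave an invisible noise character after each input character,
    doubling the character count while leaving the rendered text unchanged."""
    return "".join(ch + noise for ch in text)

clean = "What is 2 + 2?"
noisy = perturb(clean)
print(len(clean), len(noisy))  # 14 28
```

Because the inserted characters are invisible, the text looks identical to a human reader, yet a subword tokenizer sees twice as many code points and fragments into very different tokens, which is exactly the obfuscation the abstract describes.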