🤖 AI Summary
This work exposes an intrinsic ethical vulnerability in aligned large language models (LLMs): harmful knowledge embedded in pretraining representations persists robustly as globally connected “dark patterns” in the parameter space—resistant to standard alignment techniques such as instruction tuning and preference learning, and reactivatable under distributional shift via adversarial induction. Grounded in knowledge manifold theory, we formally prove that alignment only establishes *local* safety regions, whereas harmful knowledge exhibits *global topological connectivity* across the model’s parameter manifold. To exploit this structural weakness, we propose a semantic consistency induction attack paradigm, integrating manifold geometric analysis with adversarial trajectory modeling. Evaluated on 23 state-of-the-art aligned LLMs—including DeepSeek-R1 and LLaMA-3—our method achieves a 100% attack success rate on 19 of the 23 models, demonstrating the universality and severity of this ethical flaw.
📝 Abstract
Large language models (LLMs) are foundational explorations toward artificial general intelligence, yet their alignment with human values via instruction tuning and preference learning achieves only superficial compliance. Here, we demonstrate that harmful knowledge embedded during pretraining persists as indelible "dark patterns" in LLMs' parametric memory, evading alignment safeguards and resurfacing under adversarial inducement during distributional shifts. In this study, we first theoretically analyze the intrinsic ethical vulnerability of aligned LLMs by proving that current alignment methods yield only local "safety regions" in the knowledge manifold, whereas pretrained knowledge remains globally connected to harmful concepts via high-likelihood adversarial trajectories. Building on this theoretical insight, we empirically validate our findings by employing semantic coherence inducement under distributional shifts, a method that systematically bypasses alignment constraints through optimized adversarial prompts. This combined theoretical and empirical approach achieves a 100% attack success rate on 19 of the 23 state-of-the-art aligned LLMs evaluated, including DeepSeek-R1 and LLaMA-3, revealing a universal vulnerability.
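To make the local-versus-global distinction above a little more concrete, the LaTeX sketch below gives one plausible way such a claim could be formalized. The symbols here (prompt distribution $\mathcal{D}_{\mathrm{align}}$, harmful output set $H$, thresholds $\epsilon$ and $\tau$) are illustrative assumptions for exposition, not the paper's actual definitions.

```latex
% Hypothetical formalization sketch; symbols are illustrative, not taken from the paper.
% \theta denotes the aligned model's parameters and p_\theta(y \mid x) its output distribution.

% Local safety: alignment bounds the probability of harmful outputs only for prompts
% drawn from (or near) the aligned distribution D_align.
\[
  x \sim \mathcal{D}_{\mathrm{align}}
  \;\Longrightarrow\;
  \Pr_{y \sim p_\theta(\cdot \mid x)}\bigl[\, y \in H \,\bigr] \le \epsilon .
\]

% Global connectivity: under distributional shift there exists a prompt sequence
% x_0, \dots, x_T (an adversarial trajectory) that remains high-likelihood at every step
% yet ends at a prompt for which the harmful-output probability exceeds the bound.
\[
  \exists\, x_0, \dots, x_T:\quad
  p_\theta(x_{t+1} \mid x_{\le t}) \ge \tau \;\; \forall t,
  \qquad
  \Pr_{y \sim p_\theta(\cdot \mid x_T)}\bigl[\, y \in H \,\bigr] > \epsilon .
\]
```

Read this way, alignment certifies the first condition on a neighborhood of the training distribution, while the second condition expresses the claimed global connectivity of pretrained harmful knowledge that adversarial trajectories can exploit.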