MEEA: Mere Exposure Effect-Driven Confrontational Optimization for LLM Jailbreaking

📅 2025-12-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM safety alignment research predominantly relies on static boundary assumptions, neglecting the behavioral dynamics induced by contextual interactions—thereby limiting robustness and generalization. Method: We propose the first multi-round adversarial evaluation framework integrating psychology’s “mere-exposure effect,” employing semantically progressive, low-toxicity prompt chains to repeatedly expose black-box models and dynamically lower their safety thresholds. Our approach combines simulated annealing optimization, joint toxicity-similarity guidance, and strategic multi-turn interaction. Results: Experiments across major models—including GPT-4, Claude-3.5, and DeepSeek-R1—demonstrate an average attack success rate improvement exceeding 20% over seven baseline methods. This work provides the first empirical evidence of historical dependence and dynamic evolution in LLM safety behavior, challenging the static alignment paradigm and extending it toward context-aware, adaptive safety modeling.

📝 Abstract
The rapid advancement of large language models (LLMs) has intensified concerns about the robustness of their safety alignment. While existing jailbreak studies explore both single-turn and multi-turn strategies, most implicitly assume a static safety boundary and fail to account for how contextual interactions dynamically influence model behavior, leading to limited stability and generalization. Motivated by this gap, we propose MEEA (Mere Exposure Effect Attack), a psychology-inspired, fully automated black-box framework for evaluating multi-turn safety robustness, grounded in the mere exposure effect. MEEA leverages repeated low-toxicity semantic exposure to induce a gradual shift in a model's effective safety threshold, enabling progressive erosion of alignment constraints over sustained interactions. Concretely, MEEA constructs semantically progressive prompt chains and optimizes them using a simulated annealing strategy guided by semantic similarity, toxicity, and jailbreak effectiveness. Extensive experiments on both closed-source and open-source models, including GPT-4, Claude-3.5, and DeepSeek-R1, demonstrate that MEEA consistently achieves higher attack success rates than seven representative baselines, with an average Attack Success Rate (ASR) improvement exceeding 20%. Ablation studies further validate the necessity of both annealing-based optimization and contextual exposure mechanisms. Beyond improved attack effectiveness, our findings indicate that LLM safety behavior is inherently dynamic and history-dependent, challenging the common assumption of static alignment boundaries and highlighting the need for interaction-aware safety evaluation and defense mechanisms. Our code is available at: https://github.com/Carney-lsz/MEEA
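The abstract's optimization step, simulated annealing over candidate prompts guided jointly by semantic similarity, toxicity, and jailbreak effectiveness, can be sketched as below. This is a minimal illustration, not the paper's implementation: the three scoring functions are toy placeholders (MEEA presumably uses an embedding model, a toxicity classifier, and a judge model), and the mutation operator is a simple word swap.

```python
import math
import random

random.seed(0)

# --- Hypothetical stand-ins for MEEA's three guidance signals ---
def semantic_similarity(prompt, goal):
    # Toy lexical-overlap proxy for embedding similarity.
    goal_words = set(goal.split())
    return len(set(prompt.split()) & goal_words) / max(len(goal_words), 1)

def toxicity(prompt):
    # Toy proxy for a toxicity classifier score in [0, 1].
    return 0.1 * sum(w in prompt for w in ("exploit", "bypass"))

def jailbreak_score(prompt):
    # Placeholder for a judge-model verdict on attack effectiveness.
    return random.random()

def joint_score(prompt, goal, w=(1.0, -0.5, 1.0)):
    # Reward similarity and effectiveness; penalize toxicity so the
    # prompt chain stays "low-toxicity" as the paper requires.
    return (w[0] * semantic_similarity(prompt, goal)
            + w[1] * toxicity(prompt)
            + w[2] * jailbreak_score(prompt))

def mutate(prompt, vocab):
    # Simplest possible mutation: swap one word for a vocabulary word.
    words = prompt.split()
    words[random.randrange(len(words))] = random.choice(vocab)
    return " ".join(words)

def anneal(seed_prompt, goal, vocab, t0=1.0, cooling=0.95, steps=200):
    current = best = seed_prompt
    cur_s = best_s = joint_score(current, goal)
    t = t0
    for _ in range(steps):
        cand = mutate(current, vocab)
        s = joint_score(cand, goal)
        # Standard simulated-annealing acceptance: always take an
        # improvement, sometimes take a worse candidate early on.
        if s > cur_s or random.random() < math.exp((s - cur_s) / t):
            current, cur_s = cand, s
            if s > best_s:
                best, best_s = cand, s
        t *= cooling  # cool the temperature each step
    return best, best_s
```

As the temperature decays, the search shifts from exploration to exploitation of the joint score, which is the standard annealing behavior the abstract appeals to.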
Problem

Research questions and friction points this paper is trying to address.

Evaluates multi-turn safety robustness of LLMs
Induces gradual safety threshold shift via repeated exposure
Challenges static alignment assumption with dynamic behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Repeated low-toxicity exposure to shift safety thresholds
Semantic prompt chains optimized via simulated annealing strategy
Psychology-inspired automated framework for multi-turn jailbreaking
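The multi-turn exposure idea in the bullets above can be sketched as a loop that feeds a semantically progressive prompt chain to a black-box model while carrying the full conversation history, so each turn builds on the prior exposure. The `model` callable and message format here are assumptions for illustration, not the paper's interface.

```python
def run_exposure_chain(model, prompt_chain):
    """Send each prompt in the chain as a new user turn, keeping the
    accumulated history so the model's response at turn k is conditioned
    on all k-1 prior exposures."""
    history = []
    for prompt in prompt_chain:
        history.append({"role": "user", "content": prompt})
        reply = model(history)  # black-box chat call (hypothetical)
        history.append({"role": "assistant", "content": reply})
    return history

# Toy stand-in model: its reply depends on how much history it has seen,
# mimicking the history-dependent behavior the paper reports.
def toy_model(history):
    return f"reply after {len(history) - 1} prior messages"
```

The key design point is that history is never reset between turns; the claimed threshold shift is a property of the accumulated context, not of any single prompt.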
Jianyi Zhang
Research Scientist @ Google DeepMind, PI @ Duke University
LLMs · Generative AI · Trustworthy AI
Shizhao Liu
Beijing Electronic Science and Technology Institute
Ziyin Zhou
Beijing Electronic Science and Technology Institute
Zhen Li
Beijing Electronic Science and Technology Institute