DualSentinel: A Lightweight Framework for Detecting Targeted Attacks in Black-box LLM via Dual Entropy Lull Pattern

📅 2026-03-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of efficient, low-overhead, real-time defenses for black-box large language models against targeted attacks such as backdoor injection and prompt injection. The authors propose a lightweight runtime detection framework that, for the first time, identifies and leverages a distinctive "entropy lull" pattern (a period of abnormally low, stable token probability entropy observed during generation when an attack is triggered) as a detection criterion. By integrating dynamic monitoring of token-level probability entropy with task-flipping verification, the framework establishes a two-stage detection mechanism. Evaluated across diverse attack scenarios, the method achieves high detection accuracy with a near-zero false positive rate while incurring negligible inference overhead, making it well suited for practical deployment.

📝 Abstract
Recent intelligent systems integrate powerful Large Language Models (LLMs) through APIs, but their trustworthiness may be critically undermined by targeted attacks like backdoor and prompt injection attacks, which secretly force LLMs to generate specific malicious sequences. Existing defensive approaches for such threats typically require elevated access privileges, impose prohibitive costs, and hinder normal inference, rendering them impractical for real-world scenarios. To address these limitations, we introduce DualSentinel, a lightweight and unified defense framework that can accurately and promptly detect the activation of targeted attacks alongside the LLM generation process. We first identify a characteristic of compromised LLMs, termed Entropy Lull: when a targeted attack successfully hijacks the generation process, the LLM exhibits a distinct period of abnormally low and stable token probability entropy, indicating it is following a fixed path rather than making creative choices. DualSentinel leverages this pattern by developing an innovative dual-check approach. It first employs a magnitude- and trend-aware monitoring method to proactively and sensitively flag an entropy lull pattern at runtime. Upon such flagging, it triggers a lightweight yet powerful secondary verification based on task-flipping. An attack is confirmed only if the entropy lull pattern persists across both the original and the flipped task, proving that the LLM's output is coercively controlled. Extensive evaluations show that DualSentinel is both highly effective (superior detection accuracy with near-zero false positives) and remarkably efficient (negligible additional cost), offering a truly practical path toward securing deployed LLMs. The source code can be accessed at https://doi.org/10.5281/zenodo.18479273.
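The two-stage idea in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes black-box access to per-token probability distributions (e.g. via top-k logprobs returned by an API), and the window size, thresholds, and the `run_flipped_task` callable are hypothetical placeholders for illustration only.

```python
import math

def token_entropy(probs):
    """Shannon entropy of one token's probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_lull(entropies, window=5, mag_thresh=0.5, var_thresh=0.05):
    """Stage 1 (sketch): flag a run of abnormally low AND stable entropy.

    A lull is a window whose mean entropy is small (magnitude check)
    and whose variance is small (stability/trend check).
    """
    if len(entropies) < window:
        return False
    recent = entropies[-window:]
    mean = sum(recent) / window
    var = sum((e - mean) ** 2 for e in recent) / window
    return mean < mag_thresh and var < var_thresh

def dual_sentinel_check(orig_dists, run_flipped_task):
    """Stage 2 (sketch): confirm an attack only if the lull persists
    after task-flipping. `run_flipped_task` is a hypothetical callable
    that re-queries the model with a flipped task and returns the
    per-token probability distributions of the new generation.
    """
    orig_entropies = [token_entropy(d) for d in orig_dists]
    if not entropy_lull(orig_entropies):
        return False  # no lull observed: nothing to verify
    flipped_entropies = [token_entropy(d) for d in run_flipped_task()]
    # A coerced output stays on its fixed path even when the task flips.
    return entropy_lull(flipped_entropies)
```

The secondary check is what keeps false positives low in this sketch: a model that is merely answering a genuinely low-entropy task (e.g. verbatim copying) changes its output under the flipped task, which breaks the lull, whereas a hijacked model keeps emitting the attacker's fixed sequence.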
Problem

Research questions and friction points this paper is trying to address.

targeted attacks
black-box LLM
backdoor attacks
prompt injection
entropy lull
Innovation

Methods, ideas, or system contributions that make the work stand out.

Entropy Lull
Dual-check mechanism
Task-flipping verification
Black-box LLM defense
Targeted attack detection
Xiaoyi Pang
The Hong Kong University of Science and Technology, Hong Kong
Xuanyi Hao
The State Key Laboratory of Blockchain and Data Security, Zhejiang University, P. R. China; School of Cyber Science and Technology, Zhejiang University, P. R. China
Pengyu Liu
The Hong Kong University of Science and Technology, Hong Kong
Qi Luo
Ph.D. Student, HKUST(GZ)
Software Engineering, Computer Architecture
Song Guo
Chair Professor of CSE, HKUST
Large Language Model, Edge AI, Machine Learning Systems
Zhibo Wang
Professor at College of Computer Science and Technology, Zhejiang University
Internet of Things, AI Security, Data Security and Privacy