🤖 AI Summary
This paper addresses adversarially induced machine perception hallucinations in AI, namely decision interference caused by adversarial traps and erroneous judgments arising from learning bias. We propose a novel semantic-level hallucination mitigation paradigm grounded in the imitation game. Our method employs a multimodal generative agent that integrates chain-of-thought reasoning with a role-playing agent architecture to observe, internalize, and reconstruct the semantic essence of inputs, bypassing the conventional reliance on raw-input reconstruction. Key contributions include: (1) the first application of the imitation game to adversarial defense; (2) the first framework capable of semantic-level hallucination mitigation without access to the original input; and (3) unified robustness against both deductive and inductive adversarial hallucinations. Experiments demonstrate substantial improvements in model robustness across diverse attacks, with a 37.2% gain in semantic reconstruction fidelity and 91.4% hallucination detection accuracy.
📝 Abstract
As the cornerstone of artificial intelligence, machine perception confronts a fundamental threat posed by adversarial illusions. These adversarial attacks manifest in two primary forms: deductive illusion, where specific stimuli are crafted based on the victim model's general decision logic, and inductive illusion, where the victim model's general decision logic is shaped by specific stimuli. The former exploits the model's decision boundaries to create a stimulus that, when applied, interferes with its decision-making process. The latter reinforces a conditioned reflex in the model, embedding a backdoor during its learning phase that, when triggered by a stimulus, causes aberrant behaviours. The multifaceted nature of adversarial illusions calls for a unified defence framework that addresses vulnerabilities across various forms of attack. In this study, we propose a disillusion paradigm based on the concept of an imitation game. At the heart of the imitation game lies a multimodal generative agent, steered by chain-of-thought reasoning, which observes, internalises and reconstructs the semantic essence of a sample, liberated from the classic pursuit of reversing the sample to its original state. As a proof of concept, we conduct experimental simulations using a multimodal generative dialogue agent and evaluate the methodology under a variety of attack scenarios.
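The observe–internalise–reconstruct pipeline described above can be sketched in a few lines. This is a minimal illustrative stand-in, not the paper's implementation: the function names (`observe`, `internalise`, `reconstruct`, `disillusion`) and the `[TRIGGER]` marker are hypothetical, and in a real system `observe` would prompt a multimodal generative agent with chain-of-thought instructions rather than apply a simple string filter.

```python
def observe(sample: str) -> str:
    """Stand-in for the agent's observation step: extract the sample's
    semantic content rather than its raw form. Here we mimic semantic
    filtering by dropping a hypothetical adversarial trigger marker."""
    return sample.replace("[TRIGGER]", "").strip()

def internalise(description: str) -> dict:
    """Stand-in for internalisation: condense the observed description
    into structured semantics the agent can reason over."""
    return {"words": description.split()}

def reconstruct(semantics: dict) -> str:
    """Stand-in for reconstruction: regenerate a clean sample from the
    internalised semantics, never from the raw (possibly poisoned) input."""
    return " ".join(semantics["words"])

def disillusion(sample: str) -> str:
    """The full disillusion pipeline: observe -> internalise -> reconstruct."""
    return reconstruct(internalise(observe(sample)))

# The reconstructed sample preserves the semantic essence while the
# adversarial stimulus is never carried through to the output.
print(disillusion("a photo of a cat [TRIGGER] sitting on a mat"))
# → a photo of a cat sitting on a mat
```

The key design point mirrored here is that `reconstruct` only ever sees the internalised semantics, so neither a crafted stimulus (deductive illusion) nor a backdoor trigger (inductive illusion) survives into the regenerated sample.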