Adversarial Text Generation with Dynamic Contextual Perturbation

📅 2024-12-14
🏛️ 2024 IEEE Calcutta Conference (CALCON)
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing adversarial text generation methods predominantly rely on word-level or local perturbations, neglecting global contextual coherence, which leads to semantic distortion, poor readability, and high detectability. To address this, we propose Dynamic Contextual Perturbation (DCP), the first method enabling global, dynamic, context-aware adversarial generation across sentence-, paragraph-, and document-level scopes. DCP integrates pretrained language models, iterative adversarial optimization, context-sensitive masking, and multi-granularity fluency constraints to enhance attack efficacy while preserving semantic fidelity and linguistic naturalness. Extensive experiments across multiple mainstream NLP models and benchmark datasets demonstrate that DCP significantly improves both attack success rate and human imperceptibility, outperforming state-of-the-art baselines by an average of 12.7% in challenging model robustness.

๐Ÿ“ Abstract
Adversarial attacks on Natural Language Processing (NLP) models expose vulnerabilities by introducing subtle perturbations to input text, often leading to misclassification while maintaining human readability. Existing methods typically focus on word-level or local text segment alterations, overlooking the broader context, which results in detectable or semantically inconsistent perturbations. We propose a novel adversarial text attack scheme named Dynamic Contextual Perturbation (DCP). DCP dynamically generates context-aware perturbations across sentences, paragraphs, and documents, ensuring semantic fidelity and fluency. Leveraging the capabilities of pre-trained language models, DCP iteratively refines perturbations through an adversarial objective function that balances the dual objectives of inducing model misclassification and preserving the naturalness of the text. This comprehensive approach allows DCP to produce more sophisticated and effective adversarial examples that better mimic natural language patterns. Our experimental results, conducted on various NLP models and datasets, demonstrate the efficacy of DCP in challenging the robustness of state-of-the-art NLP systems. By integrating dynamic contextual analysis, DCP significantly enhances the subtlety and impact of adversarial attacks. This study highlights the critical role of context in adversarial attacks and lays the groundwork for creating more robust NLP systems capable of withstanding sophisticated adversarial strategies.
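The abstract describes an adversarial objective that balances two goals: inducing misclassification in the victim model and preserving the naturalness of the text. A minimal sketch of such a trade-off score is below; the function name, the `alpha` weight, and the score definitions are illustrative assumptions, not details taken from the paper.

```python
def adversarial_objective(victim_confidence, fluency_score, alpha=0.7):
    """Score a candidate perturbation; higher is better for the attacker.

    victim_confidence: the victim model's confidence in the TRUE label for
        the perturbed text, in [0, 1] (lower means the attack is working).
    fluency_score: a naturalness score in [0, 1], e.g. a normalized
        language-model likelihood (higher means more fluent text).
    alpha: hypothetical trade-off weight between attack strength and
        naturalness (the paper's actual weighting is not specified here).
    """
    attack_gain = 1.0 - victim_confidence
    return alpha * attack_gain + (1.0 - alpha) * fluency_score


# A perturbation that sharply lowers the victim's confidence while keeping
# the text fluent scores higher than one that barely affects the model.
strong = adversarial_objective(victim_confidence=0.2, fluency_score=0.9)
weak = adversarial_objective(victim_confidence=0.9, fluency_score=0.9)
```

In a search over candidate perturbations, the attacker would keep the candidate maximizing this score, which is how a single scalar can balance the two competing objectives the abstract mentions.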
Problem

Research questions and friction points this paper is trying to address.

Generating adversarial text with contextual awareness
Improving semantic fidelity in adversarial perturbations
Enhancing attack subtlety using dynamic contextual analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Contextual Perturbation for NLP attacks
Context-aware perturbations across multiple text levels
Iterative refinement with adversarial objective function
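The iterative-refinement idea listed above can be pictured as a greedy search: repeatedly propose context-aware substitutions at each position and keep only those that improve the adversarial objective. The sketch below is a simplified stand-in; `refine`, `score_fn`, and `propose_fn` are hypothetical names (in the paper, the proposal step would come from a pretrained masked language model and the score from the victim model plus fluency constraints).

```python
def refine(tokens, score_fn, propose_fn, max_iters=5):
    """Greedy iterative refinement: try proposed substitutions at every
    position each pass, accepting only changes that raise the score.

    tokens: list of tokens for the current text.
    score_fn(tokens) -> float: adversarial objective (higher is better).
    propose_fn(tokens, i) -> list of candidate replacements for position i.
    """
    best = list(tokens)
    best_score = score_fn(best)
    for _ in range(max_iters):
        improved = False
        for i in range(len(best)):
            for cand in propose_fn(best, i):
                trial = best[:i] + [cand] + best[i + 1:]
                trial_score = score_fn(trial)
                if trial_score > best_score:
                    best, best_score, improved = trial, trial_score, True
        if not improved:  # converged: no substitution helps anymore
            break
    return best


# Toy demonstration with stub functions: the "objective" simply counts
# occurrences of the token "bad", and every position proposes "bad".
toy_score = lambda toks: toks.count("bad")
toy_propose = lambda toks, i: ["bad"]
result = refine(["a", "b"], toy_score, toy_propose)
```

The loop terminates either at the iteration budget or when a full pass yields no improvement, which mirrors the convergence behavior one would expect from the iterative adversarial optimization the summary describes.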