🤖 AI Summary
This work addresses the challenge of detecting implicit toxicity in multimodal data, where harmful semantics emerge only through cross-modal fusion and thus evade existing detection methods. To tackle this issue, the authors propose Toxicity Association Graphs (TAGs) to model semantic relationships between seemingly benign entities and latent toxic content, and introduce a Multimodal Toxicity Covertness (MTC) metric to quantify the degree of implicit toxicity. Leveraging this framework, they construct the Covert Toxic Dataset, the first benchmark dataset focused on highly covert multimodal toxicity, and integrate graph neural networks with explainable AI techniques to enable interpretable and auditable toxicity detection. Experimental results demonstrate that the proposed approach consistently outperforms state-of-the-art methods in both high- and low-covertness scenarios, significantly advancing the field of explainable multimodal toxicity detection.
📝 Abstract
Detecting toxicity in multimodal data remains a significant challenge, as harmful meanings often lurk beneath seemingly benign individual modalities, emerging only when modalities are combined and semantic associations are activated. To address this, we propose a novel detection framework based on Toxicity Association Graphs (TAGs), which systematically model semantic associations between innocuous entities and latent toxic implications. Leveraging TAGs, we introduce the first quantifiable metric for hidden toxicity, Multimodal Toxicity Covertness (MTC), which measures the degree of concealment in toxic multimodal expressions. By integrating our detection framework with the MTC metric, our approach enables precise identification of covert toxicity while preserving full interpretability of the decision-making process, significantly enhancing transparency in multimodal toxicity detection. To validate our method, we construct the Covert Toxic Dataset, the first benchmark specifically designed to capture highly covert toxic multimodal instances. This dataset encodes nuanced cross-modal associations and serves as a rigorous testbed for evaluating both the proposed metric and the detection framework. Extensive experiments demonstrate that our approach outperforms existing methods across both low- and high-covertness toxicity regimes, while delivering clear, interpretable, and auditable detection outcomes. Together, our contributions advance the state of the art in explainable multimodal toxicity detection and lay the foundation for future context-aware and interpretable approaches. Content Warning: This paper contains examples of toxic multimodal content that may be offensive or disturbing to some readers. Reader discretion is advised.