FoCLIP: A Feature-Space Misalignment Framework for CLIP-Based Image Manipulation and Detection

📅 2025-11-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing CLIP-based image quality metrics such as CLIPScore are vulnerable to adversarial manipulation because the cross-modal alignment between their vision and language features is fragile. To address this, we propose FoCLIP, the first framework that explicitly introduces *feature-space misalignment* tailored to CLIPScore: gradient-based optimization induces controlled divergence in the joint image-text embedding space, boosting CLIPScore substantially while preserving visual fidelity. We further design a color-channel sensitivity-driven tampering detection method, enabling attack and defense to be modeled together. FoCLIP integrates three key components: (i) feature-alignment constraints, (ii) score-distribution balancing, and (iii) pixel-guard regularization, which together maintain equilibrium between score gains and image quality. Extensive evaluation on artistic images and an ImageNet subset shows an average CLIPScore improvement of 23.6% and a tampering detection accuracy of 91%.
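The fooling step described above can be sketched as gradient ascent on a similarity score with a pixel-guard penalty. This is a minimal toy illustration, not the paper's implementation: a fixed random linear map `W` stands in for CLIP's image encoder, `t` for a text embedding, and the step size `lr` and weight `lam` are assumed values.

```python
import numpy as np

# Toy stand-ins for the components named in the summary (all hypothetical):
# W  ~ "image encoder", t ~ "text embedding", x0 ~ original image pixels.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))
t = rng.normal(size=16)
x0 = rng.normal(size=64)
x = x0.copy()

def score(v):
    """Cosine similarity between encoded image and text feature (toy CLIPScore)."""
    z = W @ v
    return float(z @ t / (np.linalg.norm(z) * np.linalg.norm(t)))

lam, lr = 0.1, 0.05  # pixel-guard weight and step size (assumed, not from the paper)
for _ in range(200):
    z = W @ x
    nz, nt = np.linalg.norm(z), np.linalg.norm(t)
    # analytic gradient of cosine similarity with respect to z
    dz = t / (nz * nt) - z * (z @ t) / (nz ** 3 * nt)
    # ascend the score while the pixel-guard term pulls x back toward x0
    grad = W.T @ dz - lam * (x - x0)
    x += lr * grad

print(score(x0), score(x))  # the optimized image scores higher than the original
```

The pixel-guard term here is a simple L2 penalty toward the original pixels; the paper's regularizer may differ, but the structure (score ascent constrained by an image-fidelity term) is the point being illustrated.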

📝 Abstract
The well-aligned multimodal feature space of CLIP-based models enables effective applications such as CLIPScore, a widely adopted image quality assessment metric. However, such CLIP-based metrics are vulnerable precisely because this multimodal alignment is delicate. In this work, we propose FoCLIP, a feature-space misalignment framework for fooling CLIP-based image quality metrics. Built on stochastic gradient descent, FoCLIP integrates three key components to construct fooling examples: a feature-alignment module, the core of the method, which reduces the image-text modality gap; a score-distribution balance module; and pixel-guard regularization. Together these optimize the trade-off between CLIPScore performance and image quality. The resulting images can be engineered to maximize CLIPScore predictions across diverse input prompts, despite being visually unrecognizable or semantically incongruent with the corresponding adversarial prompts from a human perceptual perspective. Experiments on ten artistic masterpiece prompts and ImageNet subsets demonstrate that optimized images achieve significant CLIPScore improvements while preserving high visual fidelity. In addition, we found that grayscale conversion induces significant feature degradation in fooling images: their CLIPScore drops noticeably even though their pixel statistics remain consistent with the original images. Inspired by this phenomenon, we propose a color-channel sensitivity-driven tampering detection mechanism that achieves 91% accuracy on standard benchmarks. In conclusion, this work establishes a practical pathway for feature misalignment in CLIP-based multimodal systems and a corresponding defense.
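The detection mechanism in the abstract reduces to a simple decision rule: convert the image to grayscale, re-score it, and flag the image if the score drop exceeds a threshold. The sketch below is illustrative only: `toy_score` is a stand-in scorer that rewards cross-channel color variance (so the simulated fooling perturbation loses score under graying), and the threshold `tau` is an assumed parameter, not the paper's value.

```python
import numpy as np

def to_grayscale(img):
    """Average the RGB channels (H x W x 3) into a replicated-gray image."""
    g = img.mean(axis=2, keepdims=True)
    return np.repeat(g, 3, axis=2)

def tamper_flag(score_fn, img, tau=0.15):
    """Flag an image as a fooling example if graying drops its score by more than tau."""
    drop = score_fn(img) - score_fn(to_grayscale(img))
    return drop > tau

# Toy scorer standing in for CLIPScore: mean per-pixel std across channels.
toy_score = lambda img: float(img.std(axis=2).mean())

rng = np.random.default_rng(1)
clean = np.repeat(rng.random((8, 8, 1)), 3, axis=2)  # colorless image: graying changes nothing
fooled = rng.random((8, 8, 3))                       # color-heavy perturbation: graying hurts it
print(tamper_flag(toy_score, clean), tamper_flag(toy_score, fooled))  # → False True
```

With a real CLIP model the same rule would compare CLIPScore before and after grayscale conversion; the toy scorer only reproduces the reported asymmetry between clean and fooled images.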
Problem

Research questions and friction points this paper is trying to address.

Fooling CLIP-based image quality metrics through feature-space misalignment
Creating adversarial images that maximize CLIPScore despite visual-semantic incongruence
Developing detection methods against CLIP manipulation using color channel sensitivity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feature-space misalignment framework for CLIP manipulation
Gradient descent optimization with multimodal equilibrium modules
Color channel sensitivity detection for adversarial image identification
Yulin Chen
Laboratory for Big Data and Decision, National University of Defense Technology, Changsha 410073, China
Zeyuan Wang
PhD, The University of Sydney
NLP, Medical Informatics
Tianyuan Yu
Laboratory for Big Data and Decision, National University of Defense Technology, Changsha 410073, China
Yingmei Wei
Laboratory for Big Data and Decision, National University of Defense Technology, Changsha 410073, China
Liang Bai
Laboratory for Big Data and Decision, National University of Defense Technology, Changsha 410073, China