🤖 AI Summary
Existing CLIP-based image quality metrics such as CLIPScore are vulnerable to adversarial attacks because the cross-modal alignment between their vision and language features is fragile. To address this, we propose FoCLIP: the first framework to explicitly induce *feature-space misalignment* tailored to CLIPScore. Using gradient-based optimization, FoCLIP drives controlled divergence in the joint image-text embedding space, boosting CLIPScore substantially while preserving visual fidelity. We further design a color-channel-sensitivity-driven tampering detection method, enabling joint modeling of attack and defense. FoCLIP integrates three key components: (i) feature-alignment constraints, (ii) score-distribution balancing, and (iii) pixel-guard regularization, which together maintain equilibrium between score gains and image quality. Extensive evaluation on artistic images and an ImageNet subset demonstrates an average CLIPScore improvement of 23.6% and tampering detection accuracy of 91%.
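The attack described above can be sketched in a few lines. The following is a minimal NumPy toy, not the paper's implementation: a fixed random projection `W` stands in for the CLIP image encoder, `text_emb` for a frozen prompt embedding, and the `lam` pixel-guard weight, step count, and learning rate are illustrative choices. It performs gradient ascent on a cosine "CLIPScore" while a quadratic pull toward the original pixels plays the role of pixel-guard regularization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the CLIP image encoder: a fixed random projection.
D_PIX, D_EMB = 48, 16                 # 4x4 RGB image flattened -> 16-d embedding
W = rng.normal(size=(D_EMB, D_PIX))
text_emb = rng.normal(size=D_EMB)     # frozen "prompt" embedding

def clip_like_score(x):
    """Cosine similarity between the encoded image and the text embedding."""
    u = W @ x
    return float(u @ text_emb / (np.linalg.norm(u) * np.linalg.norm(text_emb)))

def score_grad(x):
    """Analytic gradient of the cosine score w.r.t. the pixel vector."""
    u = W @ x
    nu, nt = np.linalg.norm(u), np.linalg.norm(text_emb)
    s = (u @ text_emb) / (nu * nt)
    grad_u = text_emb / (nu * nt) - s * u / nu**2
    return W.T @ grad_u

def foclip_attack(x0, steps=200, lr=0.05, lam=1.0):
    """Gradient ascent on the score with a pixel-guard pull toward x0."""
    x = x0.copy()
    for _ in range(steps):
        g = score_grad(x) - lam * (x - x0)  # score term + pixel-guard term
        x = np.clip(x + lr * g, 0.0, 1.0)   # keep a valid pixel range
    return x

x0 = rng.uniform(size=D_PIX)
x_fool = foclip_attack(x0)
print(clip_like_score(x0), "->", clip_like_score(x_fool))
```

The pixel-guard term is what trades score gains against visual fidelity: raising `lam` keeps the fooling image closer to the original at the cost of a smaller score boost.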
📝 Abstract
The well-aligned image-text feature space of CLIP-based models enables effective downstream applications such as CLIPScore, a widely adopted image quality assessment metric. However, such a CLIP-based metric is vulnerable precisely because of its delicate multimodal alignment. In this work, we propose **FoCLIP**, a feature-space misalignment framework for fooling CLIP-based image quality metrics. Built on stochastic gradient descent, FoCLIP integrates three key components to construct fooling examples: a feature-alignment module, the core component that reduces the image-text modality gap; a score-distribution balance module; and pixel-guard regularization, which together balance CLIPScore gains against image quality. The resulting images achieve high CLIPScore predictions across diverse input prompts despite being visually unrecognizable or semantically incongruent with the corresponding adversarial prompts from a human perceptual perspective. Experiments on ten artistic-masterpiece prompts and ImageNet subsets demonstrate that optimized images achieve significant CLIPScore improvements while preserving high visual fidelity. In addition, we find that grayscale conversion induces significant feature degradation in fooling images: their CLIPScore drops noticeably even though their pixel statistics remain consistent with the original images. Inspired by this phenomenon, we propose a color-channel-sensitivity-driven tampering detection mechanism that achieves 91% accuracy on standard benchmarks. In conclusion, this work establishes a practical pathway for feature misalignment in CLIP-based multimodal systems, along with a corresponding defense.
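The grayscale-based defense reduces to a simple decision rule: score the image, score its grayscale version, and flag it if the score collapses. Below is a minimal NumPy sketch under stated assumptions; the `toy_score`, the hand-built `prompt` vector, and the threshold `tau` are illustrative stand-ins for CLIPScore, a real prompt embedding, and a tuned threshold, not the paper's components:

```python
import numpy as np

LUMA = np.array([0.299, 0.587, 0.114])  # standard RGB -> luma weights

def to_grayscale(img):
    """Replicate the luma channel so the image keeps its RGB shape."""
    y = img @ LUMA
    return np.repeat(y[..., None], 3, axis=-1)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def detect_tampering(img, text_emb, score_fn, tau=0.1):
    """Flag an image whose score collapses after grayscale conversion."""
    drop = score_fn(img, text_emb) - score_fn(to_grayscale(img), text_emb)
    return drop > tau

# Toy score: cosine between the flattened image and a "prompt" vector
# that responds only to the red channel, i.e. it is color-dependent.
def toy_score(img, text_emb):
    return cosine(img.ravel(), text_emb)

prompt = np.zeros((2, 2, 3)); prompt[..., 0] = 1.0
prompt = prompt.ravel()

fooling = np.zeros((2, 2, 3)); fooling[..., 0] = 1.0  # score relies on color
benign = np.full((2, 2, 3), 0.5)                      # already gray

print(detect_tampering(fooling, prompt, toy_score))   # True
print(detect_tampering(benign, prompt, toy_score))    # False
```

The toy mirrors the reported phenomenon: an image optimized to exploit color-channel structure loses its score advantage once collapsed to luma, while a benign image is largely unaffected.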