🤖 AI Summary
Existing dimensionality reduction methods (e.g., PCA, t-SNE) for visualizing cross-modal embeddings neglect cross-modal metrics such as CLIPScore, so the resulting layouts can be misaligned with cross-modal semantic consistency. To address this, the authors propose a metric-consistent, adaptive kernel regression approach to dimensionality reduction. The method introduces a learnable generalized adaptive kernel embedding objective that jointly optimizes the projection network and the kernel parameters, using cross-modal metrics (e.g., CLIPScore) as supervision via a post-projection kernel regression loss. The resulting maps support interactive zooming and multimodal overlay analysis. Experiments demonstrate that the method significantly outperforms baselines, including PCA and t-SNE, on quantitative measures of fidelity, trustworthiness, and structural preservation, and that it enables effective comparative analysis of embeddings from text-to-image generative models.
📝 Abstract
Cross-modal embeddings form the foundation of multi-modal models. However, visualization methods for interpreting cross-modal embeddings have been largely confined to traditional dimensionality reduction (DR) techniques such as PCA and t-SNE. These DR methods focus on feature distributions within a single modality, while failing to incorporate metrics (e.g., CLIPScore) that span multiple modalities. This paper introduces AKRMap, a new DR technique designed to visualize cross-modal embedding metrics with enhanced accuracy by learning a kernel regression of the metric landscape in the projection space. Specifically, AKRMap constructs a supervised projection network guided by a post-projection kernel regression loss, and employs adaptive generalized kernels that can be jointly optimized with the projection. This approach enables AKRMap to efficiently generate visualizations that capture complex metric distributions, while also supporting interactive features such as zoom and overlay for deeper exploration. Quantitative experiments demonstrate that AKRMap outperforms existing DR methods in generating more accurate and trustworthy visualizations. We further showcase the effectiveness of AKRMap in visualizing and comparing cross-modal embeddings for text-to-image models. Code and demo are available at https://github.com/yilinye/AKRMap.
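To give a rough intuition for the "kernel regression of the metric landscape" idea, the core estimator can be sketched as a plain Nadaraya–Watson regression over already-projected 2-D points: the metric value (e.g., CLIPScore) at any map location is a kernel-weighted average of the metric values of nearby embeddings. This is a minimal, illustrative sketch only; the function name, the fixed Gaussian kernel, and the hand-set bandwidth are assumptions for exposition, whereas AKRMap's kernels are adaptive generalized kernels jointly learned with the projection network.

```python
import math

def kernel_regression(points_2d, metric_values, query, bandwidth=0.5):
    """Nadaraya-Watson estimate of a metric value at a 2-D query location.

    points_2d     : list of (x, y) projected embedding coordinates
    metric_values : list of per-point metric scores (e.g., CLIPScore)
    query         : (x, y) location in the projection space
    bandwidth     : Gaussian kernel width (fixed here; learned in AKRMap)
    """
    weights = []
    for x, y in points_2d:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        weights.append(math.exp(-d2 / (2.0 * bandwidth ** 2)))
    total = sum(weights)
    # Kernel-weighted average of neighboring metric values
    return sum(w * v for w, v in zip(weights, metric_values)) / total

# Tiny usage example: three projected points with metric scores.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [0.2, 0.8, 0.5]
estimate = kernel_regression(pts, vals, (0.5, 0.5))
```

Evaluating this estimator on a dense grid of query points yields the smooth metric landscape that AKRMap renders behind the scatter of projected embeddings; the paper's post-projection loss then trains the projection so that this regressed landscape stays faithful to the true metric values.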