🤖 AI Summary
This paper addresses the substantial storage and computational overhead incurred by high-dimensional multimodal embeddings, along with the challenge of preserving cross-modal semantic consistency. The authors propose a fine-tuning-free semantic compression method. Its core insight is that a smaller inter-modal semantic gap correlates with higher compressibility; accordingly, the original high-dimensional embeddings are replaced with shared semantic representatives, namely cluster centroids derived from each modality's embedding space. Leveraging pretrained encoders and multimodal alignment techniques, the approach constructs a unified semantic center that yields compact, cross-modal representations. Extensive evaluation on multiple large-scale multimodal benchmarks demonstrates (i) significant memory reduction (average compression rate >60%), (ii) no downstream performance degradation, (iii) modality-agnostic applicability, and (iv) high deployment efficiency.
📝 Abstract
Multimodal representation learning produces high-dimensional embeddings that align diverse modalities in a shared latent space. While this enables strong generalization, it also introduces scalability challenges in both storage and downstream processing. A key open problem is how to achieve semantic compression: reducing the memory footprint of multimodal embeddings while preserving their ability to represent shared semantic content across modalities. In this paper, we prove a strong connection between reducing the modality gap, i.e., the residual separation between embeddings of different modalities, and the feasibility of post-training semantic compression. When the gap is sufficiently small, embeddings from different modalities that express the same semantics occupy a common region of the space, so their centroid is a faithful representation of that semantic concept. This enables replacing multiple embeddings with a single centroid, yielding significant memory savings. We propose a novel semantic compression approach grounded in this intuition, operating directly on pretrained encoders, and demonstrate its effectiveness across diverse large-scale multimodal downstream tasks. Our results highlight that modality alignment is a key enabler of semantic compression, showing that the proposed approach achieves significant compression without sacrificing performance.
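The centroid-replacement idea can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the synthetic embeddings, function names, and parameters (`make_aligned_embeddings`, `compress_to_centroids`, the `gap` noise scale) are all hypothetical stand-ins for well-aligned encoder outputs, where each semantic concept's image- and text-side embeddings sit close together on the unit sphere.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_aligned_embeddings(n_concepts=8, per_concept=6, dim=64, gap=0.02):
    """Toy stand-in for aligned multimodal embeddings: for each semantic
    concept, embeddings from different modalities scatter tightly around a
    shared anchor (a small modality gap), all unit-normalized."""
    anchors = rng.normal(size=(n_concepts, dim))
    anchors /= np.linalg.norm(anchors, axis=1, keepdims=True)
    embs, labels = [], []
    for c, anchor in enumerate(anchors):
        for _ in range(per_concept):  # mix of modalities per concept
            e = anchor + gap * rng.normal(size=dim)
            embs.append(e / np.linalg.norm(e))
            labels.append(c)
    return np.array(embs), np.array(labels)

def compress_to_centroids(embs, labels):
    """Replace all embeddings sharing a semantic concept with their
    re-normalized centroid -- the core compression step."""
    centroids = np.stack([embs[labels == c].mean(axis=0)
                          for c in np.unique(labels)])
    return centroids / np.linalg.norm(centroids, axis=1, keepdims=True)

embs, labels = make_aligned_embeddings()
centroids = compress_to_centroids(embs, labels)

# Storage shrinks from N embeddings to K centroids.
compression = 1 - centroids.shape[0] / embs.shape[0]

# Semantic fidelity check: each original embedding's nearest centroid
# (by cosine similarity) should correspond to its own concept.
recovered = (centroids @ embs.T).argmax(axis=0)
accuracy = (recovered == labels).mean()
print(f"compression: {compression:.0%}, nearest-centroid accuracy: {accuracy:.0%}")
```

The fidelity check mirrors the paper's intuition: compression is only lossless in this semantic sense because the modality gap (`gap` above) is small; enlarging it makes clusters overlap and nearest-centroid accuracy degrade.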