🤖 AI Summary
Existing AI models learn concept representations in isolation, producing incompatible high-level semantic spaces across models and modalities; this severely limits interpretability and collaborative deployment.
Method: We propose a unified latent-space alignment framework based on sparse autoencoders, incorporating a Global TopK sparsity mechanism and a cross-model reconstruction loss. Together these enforce that heterogeneous architectures (e.g., DINO, CLIP) activate the same latent dimensions and remain semantically coherent within a shared latent space, entirely without human annotations or supervision.
Contribution: To our knowledge, this is the first fully unsupervised, concept-level alignment method. On Open Images, it achieves a Jaccard similarity of 0.80—over three times higher than prior baselines. The aligned space enables downstream tasks including text-guided object localization and cross-modal retrieval, establishing a scalable foundation for multi-model collaborative understanding.
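
To make the Global TopK idea concrete, below is a minimal PyTorch sketch of a single Top-K mask shared across model streams, so every stream keeps the same latent dimensions per example. The function name `global_topk`, the summed-magnitude scoring rule, and the tensor shapes are illustrative assumptions, not the exact implementation from the paper or repository.

```python
import torch

def global_topk(latents, k):
    """Apply one shared Top-K mask across all model streams.

    latents: dict mapping a stream name (e.g. 'dino', 'clip') to a
             [batch, latent_dim] pre-activation tensor from that
             stream's SAE encoder.
    Returns a dict of sparse codes in which every stream keeps the
    same k latent dimensions for each example.
    """
    # Score each latent dimension by its combined activation magnitude
    # across streams (summing magnitudes is an assumption, not the
    # paper's exact selection rule).
    combined = torch.stack([z.abs() for z in latents.values()]).sum(dim=0)
    topk_idx = combined.topk(k, dim=-1).indices            # [batch, k]
    mask = torch.zeros_like(combined).scatter_(-1, topk_idx, 1.0)
    return {name: z * mask for name, z in latents.items()}

# Usage: two streams, a 512-d latent space, keep 32 shared dimensions.
z = {"dino": torch.randn(4, 512), "clip": torch.randn(4, 512)}
sparse = global_topk(z, k=32)
```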
📝 Abstract
Understanding how different AI models encode the same high-level concepts, such as objects or attributes, remains challenging because each model typically produces its own isolated representation. Existing interpretability methods like Sparse Autoencoders (SAEs) produce latent concepts individually for each model, resulting in incompatible concept spaces and limiting cross-model interpretability. To address this, we introduce SPARC (Sparse Autoencoders for Aligned Representation of Concepts), a new framework that learns a single, unified latent space shared across diverse architectures and modalities (e.g., vision models like DINO, and multimodal models like CLIP). SPARC's alignment is enforced through two key innovations: (1) a Global TopK sparsity mechanism, ensuring all input streams activate identical latent dimensions for a given concept; and (2) a Cross-Reconstruction Loss, which explicitly encourages semantic consistency between models. On Open Images, SPARC dramatically improves concept alignment, achieving a Jaccard similarity of 0.80, more than tripling the alignment compared to previous methods. SPARC creates a shared sparse latent space where individual dimensions often correspond to similar high-level concepts across models and modalities, enabling direct comparison of how different architectures represent identical concepts without requiring manual alignment or model-specific analysis. As a consequence of this aligned representation, SPARC also enables practical applications such as text-guided spatial localization in vision-only models and cross-model/cross-modal retrieval. Code and models are available at https://github.com/AtlasAnalyticsLab/SPARC.
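
The Cross-Reconstruction Loss can be pictured as decoding one model's sparse code with another model's decoder and penalizing the mismatch against that other model's original features, which pushes the shared latent dimensions toward consistent meanings. The sketch below is a hedged illustration under that reading; the function name `cross_reconstruction_loss`, the pairwise MSE form, and the linear-decoder interface are assumptions rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def cross_reconstruction_loss(sparse_codes, decoders, targets):
    """Hypothetical cross-reconstruction term.

    sparse_codes: dict of [batch, latent_dim] sparse latents per stream
    decoders:     dict of decoders mapping latent_dim -> that stream's feature_dim
    targets:      dict of original [batch, feature_dim] features per stream
    Decodes each stream's code with every *other* stream's decoder and
    compares the result to that other stream's original features.
    """
    loss = 0.0
    for src, code in sparse_codes.items():
        for tgt, dec in decoders.items():
            if src == tgt:
                continue  # the per-stream reconstruction term is handled separately
            loss = loss + F.mse_loss(dec(code), targets[tgt])
    return loss

# Usage: tie two hypothetical streams with different feature widths.
codes = {"dino": torch.randn(4, 512), "clip": torch.randn(4, 512)}
decoders = {"dino": torch.nn.Linear(512, 768), "clip": torch.nn.Linear(512, 512)}
feats = {"dino": torch.randn(4, 768), "clip": torch.randn(4, 512)}
loss = cross_reconstruction_loss(codes, decoders, feats)
```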