UniCom: Unified Multimodal Modeling via Compressed Continuous Semantic Representations

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing unified multimodal models rely on discrete visual tokenizers, which often discard fine-grained semantic information, while direct modeling of continuous representations suffers from high-dimensional generation challenges and training instability. This work proposes UniCom, a framework that employs an attention-driven semantic compressor to condense dense continuous features into compact representations, enabling efficient semantic distillation along the channel dimension. By integrating a transfusion architecture, UniCom improves training consistency and convergence speed. The study establishes, for the first time, that channel-wise compression outperforms spatial downsampling and that the transfusion architecture significantly surpasses query-based approaches in unified modeling. Notably, UniCom achieves high-fidelity, highly consistent image generation and controllable editing without requiring a VAE, setting a new state of the art in unified multimodal modeling.

📝 Abstract
Current unified multimodal models typically rely on discrete visual tokenizers to bridge the modality gap. However, discretization inevitably discards fine-grained semantic information, leading to suboptimal performance in visual understanding tasks. Conversely, directly modeling continuous semantic representations (e.g., CLIP, SigLIP) poses significant challenges in high-dimensional generative modeling, resulting in slow convergence and training instability. To resolve this dilemma, we introduce UniCom, a unified framework that harmonizes multimodal understanding and generation via compressed continuous representations. We empirically demonstrate that reducing the channel dimension is significantly more effective than spatial downsampling for both reconstruction and generation. Accordingly, we design an attention-based semantic compressor to distill dense features into a compact unified representation. Furthermore, we validate that the transfusion architecture surpasses query-based designs in convergence and consistency. Experiments demonstrate that UniCom achieves state-of-the-art generation performance among unified models. Notably, by preserving rich semantic priors, it delivers exceptional controllability in image editing and maintains image consistency even without relying on a VAE.
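The channel-wise compression idea above can be illustrated with a minimal sketch. This is not UniCom's actual compressor (the paper's architecture details are not given here); it assumes a simple attention scheme in which `C_out` learned queries attend over per-channel key embeddings, so each compressed channel is a softmax-weighted mixture of the original channels while the spatial token grid is left untouched. All names, shapes, and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

C_in, C_out, d = 1152, 256, 64   # input channels, compressed channels, key dim
T = 196                          # visual tokens (e.g., a 14x14 patch grid)

# Stand-ins for learned parameters: one query per output channel,
# one key embedding per input channel.
Q = rng.standard_normal((C_out, d))
K = rng.standard_normal((C_in, d))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def compress_channels(X):
    """Compress token features (T, C_in) -> (T, C_out) along the channel axis."""
    attn = softmax(Q @ K.T / np.sqrt(d))  # (C_out, C_in) channel-mixing weights
    return X @ attn.T                     # spatial layout preserved

X = rng.standard_normal((T, C_in))  # dense continuous features (e.g., SigLIP)
Z = compress_channels(X)
print(Z.shape)  # -> (196, 256)
```

Note the contrast with spatial downsampling: here the token count `T` stays fixed and only the feature dimension shrinks, which is the axis the abstract reports as more effective for both reconstruction and generation.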
Problem

Research questions and friction points this paper is trying to address.

multimodal modeling
continuous representation
discrete tokenization
generative modeling
semantic compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

compressed continuous representation
unified multimodal modeling
semantic compressor
transfusion architecture
discrete-free generation