🤖 AI Summary
Existing unified multimodal models struggle to simultaneously preserve the abstract semantics required for visual understanding and the fine-grained details essential for generation, often suffering representation inconsistencies from decoupled encoders or discrete quantization. This work proposes HYDRA-TOK, a representation-harmonized pure Vision Transformer (ViT) tokenizer, and builds on it HYDRA, a native unified framework that integrates perception and generation within a single parameter space. A Generation-Semantic Bottleneck (GSB) progressively restructures the backbone from a structure-preserving generative stage (Gen-ViT) into a semantic encoding stage (Sem-ViT). The method achieves state-of-the-art results across visual reconstruction (rFID 0.08), generative evaluation (GenEval 0.86), and eight visual understanding tasks, outperforming prior native unified models by an average of 10.0 points on understanding metrics.
📝 Abstract
Unified Multimodal Models (UMMs) struggle to bridge the fundamental gap between the abstract representations needed for visual understanding and the detailed primitives required for generation. Existing approaches typically compromise by employing decoupled encoders, stacking representation encoders atop VAEs, or resorting to discrete quantization; these methods often disrupt information coherence and lead to optimization conflicts. To this end, we introduce HYDRA-TOK, a representation-harmonized pure ViT built on the insight that visual modeling should evolve from generation to understanding. HYDRA-TOK reformulates the standard backbone into a progressive learner that transitions from a Gen-ViT, which captures structure-preserving primitives, to a Sem-ViT for semantic encoding. Crucially, this transition is mediated by a Generation-Semantic Bottleneck (GSB), which first compresses features into a low-dimensional space to filter out noise for robust synthesis, and then restores dimensionality to support complex semantic comprehension. Built upon this foundation, we present HYDRA, a native unified framework that integrates perception and generation within a single parameter space. Extensive experiments establish HYDRA as a new state of the art: it sets a benchmark in visual reconstruction (rFID 0.08), achieves top-tier generation performance on GenEval (0.86), DPG-Bench (86.4), and WISE (0.53), and simultaneously outperforms previous native UMMs by an average of 10.0 points across eight challenging understanding benchmarks.
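The GSB's compress-then-restore pattern can be illustrated with a minimal NumPy sketch. Note that this is not the paper's implementation: the dimensions (`d_model`, `d_bottleneck`), layer names, and plain linear projections are all illustrative assumptions; the actual GSB sits between the Gen-ViT and Sem-ViT stages and is trained end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    """Affine projection: x @ w + b."""
    return x @ w + b

# Illustrative dimensions (not taken from the paper): token features
# of width d_model are squeezed to a narrow bottleneck d_bottleneck,
# then expanded back to full width for the semantic stream.
n_tokens, d_model, d_bottleneck = 256, 768, 16

# Hypothetical GSB parameters: a down-projection that discards noise
# for robust synthesis, and an up-projection that restores capacity
# for semantic comprehension.
w_down = rng.standard_normal((d_model, d_bottleneck)) * 0.02
b_down = np.zeros(d_bottleneck)
w_up = rng.standard_normal((d_bottleneck, d_model)) * 0.02
b_up = np.zeros(d_model)

x = rng.standard_normal((n_tokens, d_model))  # Gen-ViT output features
z = linear(x, w_down, b_down)                 # compressed generative code
h = linear(z, w_up, b_up)                     # restored semantic features

print(z.shape)  # (256, 16)
print(h.shape)  # (256, 768)
```

The low-dimensional code `z` plays the role of the structure-preserving generative representation, while the widened `h` feeds the downstream semantic encoder; the design choice being illustrated is that generation and understanding share one pathway rather than two decoupled encoders.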