🤖 AI Summary
Existing vision-language models treat image understanding and editing as disjoint tasks, hindering unified support for interactive image editing driven by referential expressions. To address this, we propose the first unified architecture that jointly optimizes segmentation-aware perception and generative modeling. Our method introduces a dual-branch visual encoder and a MoVQGAN tokenizer, leveraging referential segmentation masks as spatial conditioning to progressively guide a diffusion-based decoder toward object-level controllable generation. The framework integrates referential segmentation perception with object-centric generation end to end, eliminating cascaded multi-model pipelines. Evaluated across three core tasks—multimodal understanding, referring expression segmentation, and controllable image generation—our approach achieves state-of-the-art performance. Notably, it significantly enhances segmentation-generation synergy, establishing a scalable, unified paradigm for interactive visual editing.
📝 Abstract
Recent Large Vision Language Models (LVLMs) demonstrate promising capabilities in unifying visual understanding and generative modeling, enabling both accurate content understanding and flexible editing. However, current approaches treat "what to see" and "how to edit" separately: they either perform isolated object segmentation or use segmentation masks merely as conditional prompts for local edit generation, often relying on multiple disjointed models. To bridge these gaps, we introduce FOCUS, a unified LVLM that integrates segmentation-aware perception and controllable object-centric generation within an end-to-end framework. FOCUS employs a dual-branch visual encoder to simultaneously capture global semantic context and fine-grained spatial details. In addition, we leverage a MoVQGAN-based visual tokenizer to produce discrete visual tokens that enhance generation quality. To enable accurate and controllable image editing, we propose a progressive multi-stage training pipeline in which segmentation masks are jointly optimized and used as spatial condition prompts to guide the diffusion decoder. This strategy aligns the visual encoding, segmentation, and generation modules, effectively bridging segmentation-aware perception with fine-grained visual synthesis. Extensive experiments across three core tasks—multimodal understanding, referring expression segmentation, and controllable image generation—demonstrate that FOCUS achieves strong performance by jointly optimizing visual perception and generative capabilities.
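To make the spatial-conditioning idea concrete, here is a minimal NumPy sketch of one common way a segmentation mask can condition a latent-space decoder: downsample the mask to the latent resolution and append it as an extra channel. The function name and the channel-concatenation scheme are illustrative assumptions, not the paper's actual FOCUS implementation, which uses a diffusion decoder with a jointly trained segmentation branch.

```python
import numpy as np

def condition_latent_on_mask(latent, mask):
    """Append a downsampled binary segmentation mask to a latent feature
    map as an extra conditioning channel.

    latent: float array of shape (C, H, W) — decoder latent.
    mask:   bool/int array of shape (MH, MW) — referential segmentation mask
            at image resolution.
    Returns an array of shape (C + 1, H, W).

    Hypothetical simplification: real mask conditioning in diffusion models
    is often injected at every denoising step and/or via cross-attention.
    """
    c, h, w = latent.shape
    mh, mw = mask.shape
    # Nearest-neighbour downsample of the mask to the latent's spatial size.
    ys = np.arange(h) * mh // h
    xs = np.arange(w) * mw // w
    mask_small = mask[np.ix_(ys, xs)].astype(latent.dtype)
    # Concatenate along the channel axis so the decoder "sees" the region.
    return np.concatenate([latent, mask_small[None]], axis=0)

latent = np.random.randn(4, 8, 8).astype(np.float32)   # toy 4-channel latent
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True                              # toy object region
cond = condition_latent_on_mask(latent, mask)
print(cond.shape)  # → (5, 8, 8)
```

In practice the mask channel lets the decoder restrict edits to the referred object while leaving the rest of the image untouched, which is the object-level controllability the abstract describes.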