Flash-Unified: A Training-Free and Task-Aware Acceleration Framework for Native Unified Models

📅 2026-03-16
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high computational cost of native unified multimodal models in both generation and understanding tasks, a challenge exacerbated by static acceleration methods that ignore the distinct computational characteristics of each task. The study reveals, for the first time, task-specific parameter specialization within unified models and introduces FlashU, a training-free, task-aware dynamic acceleration framework. FlashU tailors its optimization strategy to each task: task-specific network pruning, dynamic layer skipping, diffusion head caching, and V-Norm proxy-based dynamic visual token pruning. Evaluated on Show-o2, FlashU achieves a 1.78–2.01× inference speedup while preserving state-of-the-art performance, substantially outperforming existing approaches.
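The V-Norm proxy named above can be illustrated with a short sketch. The summary does not give the exact scoring rule, so this assumes the straightforward reading: score each visual token by the L2 norm of its attention value vector and keep only the highest-scoring fraction. The function name and keep-ratio parameter are ours, not the paper's.

```python
import math

def v_norm(vec):
    """L2 norm of a single value vector."""
    return math.sqrt(sum(v * v for v in vec))

def prune_visual_tokens(value_vectors, keep_ratio=0.5):
    """Rank visual tokens by the norm of their value vectors (a 'V-Norm'
    proxy for token importance) and keep the top fraction, preserving the
    tokens' original order. Illustrative sketch only; FlashU's actual
    criterion may differ in detail."""
    scores = [(v_norm(vec), i) for i, vec in enumerate(value_vectors)]
    k = max(1, int(len(scores) * keep_ratio))
    top_k = sorted(scores, reverse=True)[:k]           # highest norms first
    return sorted(i for _, i in top_k)                 # restore token order
```

With four tokens whose value norms are 5, 1, 10, and 0.5, a keep ratio of 0.5 retains tokens 0 and 2.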

πŸ“ Abstract
Native unified multimodal models, which integrate both generative and understanding capabilities, face substantial computational overhead that hinders their real-world deployment. Existing acceleration techniques typically employ a static, monolithic strategy, ignoring the fundamental divergence in computational profiles between iterative generation tasks (e.g., image generation) and single-pass understanding tasks (e.g., VQA). In this work, we present the first systematic analysis of unified models, revealing pronounced parameter specialization, where distinct neuron sets are critical for each task. This implies that, at the parameter level, unified models have implicitly internalized separate inference pathways for generation and understanding within a single architecture. Based on these insights, we introduce a training-free and task-aware acceleration framework, FlashU, that tailors optimization to each task's demands. Across both tasks, we introduce Task-Specific Network Pruning and Dynamic Layer Skipping, aiming to eliminate inter-layer and task-specific redundancy. For visual generation, we implement a time-varying control signal for the guidance scale and a temporal approximation for the diffusion head via Diffusion Head Cache. For multimodal understanding, building upon the pruned model, we introduce Dynamic Token Pruning via a V-Norm Proxy to exploit the spatial redundancy of visual inputs. Extensive experiments on Show-o2 demonstrate that FlashU achieves 1.78× to 2.01× inference acceleration across both understanding and generation tasks while maintaining SOTA performance, outperforming competing unified models and validating our task-aware acceleration paradigm. Our code is publicly available at https://github.com/Rirayh/FlashU.
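The core idea behind layer skipping can be sketched in a few lines. The abstract does not specify FlashU's dynamic criterion, so this minimal sketch uses a simple calibration pass that flags near-identity layers (those whose output barely differs from their input) and skips them on later passes; all names, thresholds, and the calibration scheme here are illustrative assumptions.

```python
import math

def _rel_change(a, b):
    """Relative L2 change from hidden state a to hidden state b."""
    num = math.sqrt(sum((y - x) ** 2 for x, y in zip(a, b)))
    den = math.sqrt(sum(x * x for x in a)) + 1e-8
    return num / den

def calibrate_skippable(x, layers, threshold=0.01):
    """Calibration pass: run every layer once and flag those whose output
    is almost identical to their input. A hypothetical stand-in for
    FlashU's dynamic skipping criterion."""
    skippable = []
    for i, layer in enumerate(layers):
        out = layer(x)
        if _rel_change(x, out) < threshold:
            skippable.append(i)      # near-identity layer: safe to skip
        else:
            x = out
    return skippable

def forward_skipping(x, layers, skippable):
    """Subsequent passes: never execute the flagged layers, saving their
    full compute cost."""
    for i, layer in enumerate(layers):
        if i not in skippable:
            x = layer(x)
    return x
```

For a toy stack where the middle layer adds only a negligible offset, calibration flags it and later forward passes bypass it entirely.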
Problem

Research questions and friction points this paper is trying to address.

unified multimodal models
computational overhead
task-aware acceleration
generation vs understanding
inference efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

task-aware acceleration
parameter specialization
training-free pruning
dynamic layer skipping
unified multimodal models