🤖 AI Summary
This work addresses the challenge of lineart colorization in professional creative workflows, where both precision and flexibility are essential yet existing methods struggle to accommodate diverse user constraints. The authors propose a unified multimodal colorization framework featuring a novel spatial-semantic dual-path encoding strategy, which leverages a vision-language model for semantic guidance. To enhance temporal consistency, they introduce a dense feature alignment loss and a temporal redundancy elimination mechanism. A key innovation is the adaptive spatial-semantic gating module, which dynamically resolves conflicts between multimodal inputs. Experimental results demonstrate that the proposed method significantly outperforms current approaches in controllability, visual quality, and temporal stability, offering a practical and reliable tool for real-world colorization applications.
📝 Abstract
Lineart colorization is a critical stage in professional content creation, yet achieving precise and flexible results under diverse user constraints remains a significant challenge. To address this, we propose OmniColor, a unified framework for multi-modal lineart colorization that supports arbitrary combinations of control signals. Specifically, we systematically categorize guidance signals into two types: spatially-aligned conditions and semantic-reference conditions. For spatially-aligned inputs, we employ a dual-path encoding strategy paired with a Dense Feature Alignment loss to ensure rigorous boundary preservation and precise color restoration. For semantic-reference inputs, we utilize a VLM-only encoding scheme integrated with a Temporal Redundancy Elimination mechanism that filters repetitive information and improves inference efficiency. To resolve potential input conflicts, we introduce an Adaptive Spatial-Semantic Gating module that dynamically balances multi-modal constraints. Experimental results demonstrate that OmniColor achieves superior controllability, visual quality, and temporal stability, providing a robust and practical solution for lineart colorization. The source code and dataset will be released at https://github.com/zhangxulu1996/OmniColor.
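The gating idea described in the abstract — dynamically balancing spatially-aligned and semantic-reference features — can be sketched as a learned convex blend. The following is a minimal NumPy illustration, not the paper's implementation: the class name, the per-dimension sigmoid gate, and the blending rule are all assumptions for exposition.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AdaptiveSpatialSemanticGate:
    """Hypothetical sketch of an adaptive spatial-semantic gate.

    A projection of the concatenated feature streams produces a
    per-dimension gate g in (0, 1); the output is the convex blend
    g * f_spatial + (1 - g) * f_semantic, so each output component
    lies between the corresponding components of the two inputs.
    """

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        # Stand-ins for learned parameters.
        self.W = rng.standard_normal((dim, 2 * dim)) * 0.1
        self.b = np.zeros(dim)

    def __call__(self, f_spatial, f_semantic):
        z = np.concatenate([f_spatial, f_semantic])
        g = sigmoid(self.W @ z + self.b)   # gate in (0, 1) per dimension
        return g * f_spatial + (1.0 - g) * f_semantic

# Usage: blend a spatial feature with a semantic-reference feature.
gate = AdaptiveSpatialSemanticGate(dim=4)
fused = gate(np.ones(4), np.zeros(4))
```

In this sketch, a conflicting pair of constraints is resolved softly rather than by hard selection; in the actual framework the gate would be trained end-to-end alongside the colorization backbone.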