🤖 AI Summary
To address region-level color inconsistency in reference-guided line art colorization caused by character pose variations, this paper proposes a hierarchical attention mechanism with implicit semantic alignment. Within a Diffusion Transformer framework, the method achieves fine-grained semantic alignment and long-range dependency modeling via dynamic cross-image attention weighting and context-aware pooling-based spatial feature expansion. Innovatively, it unifies conditional control, hierarchical attention, and receptive field enhancement within the diffusion process, significantly improving pose robustness. On two standard benchmarks, the approach surpasses existing state-of-the-art methods across quantitative metrics, including FID and LPIPS, as well as in user studies. Qualitative results demonstrate superior regional color consistency and visual fidelity under complex pose discrepancies.
📝 Abstract
Recent advances in diffusion models have significantly improved the performance of reference-guided line art colorization. However, existing methods still struggle with region-level color consistency, especially when the reference and target images differ in character pose or motion. Instead of relying on external matching annotations between the reference and target, we propose to discover semantic correspondences implicitly through internal attention mechanisms. In this paper, we present MangaDiT, a powerful model for reference-guided line art colorization based on Diffusion Transformers (DiT). Our model takes both line art and reference images as conditional inputs and introduces a hierarchical attention mechanism with a dynamic attention weighting strategy. This mechanism augments the vanilla attention with an additional context-aware path that leverages pooled spatial features, effectively expanding the model's receptive field and enhancing region-level color alignment. Experiments on two benchmark datasets demonstrate that our method significantly outperforms state-of-the-art approaches, achieving superior performance in both qualitative and quantitative evaluations.
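The hierarchical attention described above, a vanilla attention path augmented with a context-aware path over pooled spatial features, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the pooling window `pool` and the mixing weight `alpha` are hypothetical stand-ins for the paper's learned, dynamic attention weighting.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # standard scaled dot-product attention
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def hierarchical_attention(q, k, v, pool=4, alpha=0.5):
    """Fine-grained attention plus a coarse context path.

    The coarse path average-pools keys/values along the token axis,
    effectively widening the receptive field; `alpha` is a fixed
    stand-in for the paper's dynamic attention weighting strategy.
    """
    fine = attention(q, k, v)
    # average-pool keys/values over non-overlapping windows of size `pool`
    n = (k.shape[0] // pool) * pool
    k_pooled = k[:n].reshape(-1, pool, k.shape[-1]).mean(axis=1)
    v_pooled = v[:n].reshape(-1, pool, v.shape[-1]).mean(axis=1)
    coarse = attention(q, k_pooled, v_pooled)
    return alpha * fine + (1 - alpha) * coarse
```

In the paper's setting, the queries would come from the noisy target latents while keys/values include reference-image tokens, so the coarse path lets distant reference regions influence each target region despite pose misalignment.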