🤖 AI Summary
Existing diffusion-based pose-driven animation methods require spatially aligned reference-pose pairs with identical skeletal structures, limiting their applicability to misaligned or cross-structural inputs. To address this, we propose an alignment-agnostic, high-fidelity framework for character animation and image-based pose transfer. Our method introduces a self-supervised outpainting training paradigm with a unified occluded-input format, enabling arbitrary reference layouts. We further design identity-aware feature extraction and hybrid fusion attention to explicitly decouple appearance from skeletal structure. Additionally, identity-robust pose control and a token-replacement strategy enhance temporal coherence in long videos. The framework natively supports dynamic sequence lengths and multi-resolution inputs. Extensive experiments demonstrate significant improvements over state-of-the-art methods on cross-structural and layout-varying benchmarks, achieving high-quality, temporally consistent long-sequence animation generation.
📄 Abstract
Recent advances in diffusion models have greatly improved pose-driven character animation. However, existing methods are limited to spatially aligned reference-pose pairs with matched skeletal structures, and handling reference-pose misalignment remains unsolved. To address this, we present One-to-All Animation, a unified framework for high-fidelity character animation and image-based pose transfer from references with arbitrary layouts. First, to handle spatially misaligned references, we reformulate training as a self-supervised outpainting task that transforms diverse-layout references into a unified occluded-input format. Second, to process partially visible references, we design a reference extractor for comprehensive identity feature extraction. Further, we integrate hybrid reference fusion attention to handle varying resolutions and dynamic sequence lengths. Finally, to improve generation quality, we introduce identity-robust pose control, which decouples appearance from skeletal structure to mitigate pose overfitting, and a token-replacement strategy for coherent long-video generation. Extensive experiments show that our method outperforms existing approaches. The code and model will be available at https://github.com/ssj9596/One-to-All-Animation.
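The "unified occluded-input format" can be pictured as placing a reference of any layout onto a fixed canvas together with a visibility mask, so that the model always sees one input shape and learns to outpaint the hidden region. The sketch below is a minimal illustration of that idea in NumPy; the function name, canvas size, and placement arguments are our own illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def make_occluded_input(reference, canvas_hw=(64, 64), top_left=(0, 0)):
    """Illustrative sketch (not the paper's code): place an
    arbitrary-layout reference crop onto a fixed canvas and return
    (canvas, mask), where mask == 1 marks visible reference pixels and
    mask == 0 marks the region the model must outpaint."""
    H, W = canvas_hw
    canvas = np.zeros((H, W, 3), dtype=reference.dtype)
    mask = np.zeros((H, W), dtype=np.float32)
    y, x = top_left
    h = min(reference.shape[0], H - y)
    w = min(reference.shape[1], W - x)
    canvas[y:y + h, x:x + w] = reference[:h, :w]
    mask[y:y + h, x:x + w] = 1.0
    return canvas, mask

# Self-supervised training then uses the full frame itself as the
# target and computes the loss only where mask == 0, so references of
# any size or position map to the same input format.
```

Because the canvas and mask have a fixed shape regardless of the reference's original layout, the same network input format covers aligned, misaligned, and partially visible references.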