🤖 AI Summary
Animated characters exhibit high diversity in appearance, motion, and deformation, which degrades the accuracy of existing recognition systems and hinders character-centric content understanding and accessibility applications. To address this, we propose the first multimodal animated character recognition framework, introducing an audio-visual "character bank" that mitigates long-tail distribution challenges via online resource mining, cross-modal feature alignment, and retrieval. Building on accurate recognition, the framework supports both Audio Description (AD) generation and character-aware subtitle generation. Evaluated on the newly released CMD-AM dataset, our method significantly outperforms conventional face-detection-based approaches, achieving state-of-the-art character identification accuracy and stronger performance on downstream accessibility tasks—including audio description and inclusive subtitling—thereby providing practical technical support for making animated films accessible to persons with disabilities.
📝 Abstract
Animated movies are captivating for their unique character designs and imaginative storytelling, yet they pose significant challenges for existing recognition systems. Unlike the consistent visual patterns detected by conventional face recognition methods, animated characters exhibit extreme diversity in their appearance, motion, and deformation. In this work, we propose an audio-visual pipeline to enable automatic and robust animated character recognition, and thereby enhance character-centric understanding of animated movies. Central to our approach is the automatic construction of an audio-visual character bank from online sources. This bank contains both visual exemplars and voice (audio) samples for each character, enabling subsequent multi-modal character recognition despite long-tailed appearance distributions. Building on accurate character recognition, we explore two downstream applications: Audio Description (AD) generation for visually impaired audiences, and character-aware subtitling for the hearing impaired. To support research in this domain, we introduce CMD-AM, a new dataset of 75 animated movies with comprehensive annotations. Our character-centric pipeline demonstrates significant improvements in both accessibility and narrative comprehension for animated content over prior face-detection-based approaches. For the code and dataset, visit https://www.robots.ox.ac.uk/~vgg/research/animated_ad/.
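The multimodal recognition step described above can be pictured as retrieval against the character bank: a query's visual and voice embeddings are each compared to a character's stored exemplars, and the fused score picks the identity. The following is a minimal sketch under stated assumptions — the embedding models, bank layout, max-pooling over exemplars, and fusion weight `alpha` are illustrative choices, not the paper's actual implementation:

```python
import numpy as np

def identify_character(face_emb, voice_emb, bank, alpha=0.5):
    """Illustrative bank retrieval: score each character by a fused
    cosine similarity between the query embeddings and that character's
    visual exemplars / voice samples, and return the best match.

    bank: {name: {"visual": [np.ndarray, ...], "audio": [np.ndarray, ...]}}
    alpha: hypothetical weight trading off visual vs. audio evidence.
    """
    def cos(a, b):
        # Cosine similarity with a small epsilon for numerical safety.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    best_name, best_score = None, -np.inf
    for name, entry in bank.items():
        # Take the closest exemplar in each modality (max-pooling).
        vis = max(cos(face_emb, e) for e in entry["visual"])
        aud = max(cos(voice_emb, s) for s in entry["audio"])
        score = alpha * vis + (1 - alpha) * aud
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```

Because both modalities vote, a character whose on-screen appearance has deformed beyond what the visual exemplars cover can still be recovered from a matching voice sample, which is the intuition behind pairing the bank with cross-modal retrieval.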