AI Summary
Existing vision-driven text-to-speech (TTS) methods are constrained to real-face conditioning and struggle to synthesize expressive, high-fidelity, and emotionally consistent speech from diverse stylized portraits (e.g., anime, sketch, photorealistic). This paper proposes the first cross-style portrait-driven TTS framework. We construct EMM-TTS, a high-quality multimodal expression-speech dataset; design a visual feature disentanglement mechanism that suppresses interference from background and clothing; and introduce diffusion-based speech synthesis with joint visual-encoder/speech-decoder training, identity/emotion representation disentanglement, and style-adaptive feature normalization. Experiments demonstrate significant improvements over baselines in naturalness (MOS = 4.12) and emotional consistency (Emo-ACC = 86.7%). Moreover, our method enables zero-shot voice synthesis for unseen characters while preserving stylistic fidelity and emotional alignment.
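The style-adaptive feature normalization mentioned above can be illustrated with a minimal AdaIN-style sketch: per-channel statistics of a feature map are normalized away, then a scale and shift predicted from a style embedding are applied. The function name, shapes, and projection matrices here are illustrative assumptions, not the paper's actual implementation (which is learned end-to-end inside the network).

```python
import numpy as np

def style_adaptive_norm(features, style_embedding, w_gamma, w_beta, eps=1e-5):
    """AdaIN-style normalization sketch (illustrative, not the paper's code).

    features:        (channels, time) feature map
    style_embedding: (style_dim,) embedding of the portrait style
    w_gamma, w_beta: (style_dim, channels) hypothetical projection matrices
                     that predict per-channel scale and shift
    """
    # Normalize each channel to zero mean, unit variance over time.
    mu = features.mean(axis=-1, keepdims=True)
    sigma = features.std(axis=-1, keepdims=True)
    normalized = (features - mu) / (sigma + eps)
    # Re-modulate with style-conditioned scale (gamma) and shift (beta).
    gamma = style_embedding @ w_gamma   # (channels,)
    beta = style_embedding @ w_beta     # (channels,)
    return gamma[:, None] * normalized + beta[:, None]
```

After this operation, the channel statistics of the output are controlled by the style embedding rather than by the input, which is the usual motivation for adaptive normalization layers.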
Abstract
Humans can perceive a speaker's characteristics (e.g., identity, gender, personality, and emotion) from their appearance, and these characteristics are generally aligned with the speaker's voice style. Recent vision-driven text-to-speech (TTS) research has grounded its investigations in real-person faces, restricting effective speech synthesis to a narrow setting and excluding vast potential usage scenarios with diverse characters and image styles. To solve this issue, we introduce a novel approach, FaceSpeak. It extracts salient identity characteristics and emotional representations from a wide variety of image styles while mitigating extraneous information (e.g., background, clothing, and hair color), resulting in synthesized speech closely aligned with a character's persona. Furthermore, to overcome the scarcity of multi-modal TTS data, we have devised an innovative dataset, namely Expressive Multi-Modal TTS (EMM-TTS), which is diligently curated and annotated to facilitate research in this domain. The experimental results demonstrate that our proposed FaceSpeak can generate portrait-aligned voices with satisfactory naturalness and quality.
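The identity/emotion disentanglement described in the abstract can be sketched in a minimal form: a shared visual feature is projected through two separate heads, and an orthogonality-style penalty discourages the two embeddings from encoding the same information. All names, projection matrices, and the specific loss below are assumptions for illustration; the paper's actual mechanism is learned jointly with the speech decoder.

```python
import numpy as np

def split_embeddings(visual_feat, w_id, w_emo):
    """Project a shared visual feature into identity and emotion embeddings.

    visual_feat: (feat_dim,) feature from a portrait encoder
    w_id, w_emo: (feat_dim, emb_dim) hypothetical projection heads
    """
    e_id = visual_feat @ w_id    # identity embedding
    e_emo = visual_feat @ w_emo  # emotion embedding
    return e_id, e_emo

def orthogonality_loss(e_id, e_emo, eps=1e-8):
    """Squared cosine similarity between the two embeddings.

    One common disentanglement penalty: it is zero when the embeddings
    are orthogonal and approaches one when they are collinear.
    """
    num = float(np.dot(e_id, e_emo))
    den = np.linalg.norm(e_id) * np.linalg.norm(e_emo) + eps
    return (num / den) ** 2
```

Minimizing such a penalty alongside the synthesis objective is one standard way to push identity and emotion into complementary subspaces, so that either can be swapped independently at inference time.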