🤖 AI Summary
This work addresses the limitation of existing methods that typically support only six basic 3D facial expressions, hindering fine-grained and semantically driven generation and transitions. To overcome this, the authors propose the I2FET framework, which enables text-driven synthesis of arbitrary 3D facial expressions and smooth transitions between them. The key innovations include an IFED module for multimodal alignment between textual instructions and facial expression features, and a vertex reconstruction loss to enhance semantic consistency in the latent space. Evaluated on the CK+ and CelebV-HQ datasets, the proposed method significantly outperforms current approaches, generating high-fidelity, semantically accurate, and naturally continuous 3D facial expression sequences.
📝 Abstract
3D avatars typically support only the six cardinal facial expressions. To simulate realistic emotional variation, we should be able to render a transition between two arbitrary expressions. This study presents a new framework for instruction-driven facial expression generation that, starting from a face image, produces a 3D face and transforms its expression from one designated expression to another. The Instruction-driven Facial Expression Decomposer (IFED) module is introduced to facilitate multimodal learning and capture the correlation between textual descriptions and facial expression features. Subsequently, we propose the Instruction to Facial Expression Transition (I2FET) method, which leverages IFED and a vertex reconstruction loss to refine the semantic comprehension of latent vectors, thereby generating a facial expression sequence according to a given instruction. Lastly, we present the Facial Expression Transition model to generate smooth transitions between facial expressions. Extensive evaluation shows that the proposed model outperforms state-of-the-art methods on the CK+ and CelebV-HQ datasets, generating facial expression trajectories that follow text instructions. Because text prompts allow diverse descriptions of human emotional states, the repertoire of facial expressions and the transitions between them can be greatly expanded. We expect our framework to find a variety of practical applications.
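The abstract mentions a vertex reconstruction loss used to improve the semantic consistency of the latent space. The paper does not spell out its exact form here; a common choice for mesh-based methods is a mean squared error over the 3D vertex positions of the predicted and ground-truth faces. The sketch below illustrates that assumed formulation (the function name, shapes, and the uniform-offset example are illustrative, not taken from the paper):

```python
import numpy as np

def vertex_reconstruction_loss(pred_vertices, gt_vertices):
    """Assumed per-vertex MSE: mean over vertices of the squared
    Euclidean distance between predicted and ground-truth positions.

    pred_vertices, gt_vertices: arrays of shape (N, 3).
    """
    diff = pred_vertices - gt_vertices          # (N, 3) residuals
    return float(np.mean(np.sum(diff ** 2, axis=-1)))

# Hypothetical example: a 4-vertex mesh displaced by 0.1 on every axis.
gt = np.zeros((4, 3))
pred = gt + 0.1
loss = vertex_reconstruction_loss(pred, gt)    # 3 * 0.1**2 = 0.03
```

In training, such a loss would be summed over the frames of the generated expression sequence, anchoring the text-conditioned latent vectors to geometrically plausible faces.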