AI Summary
Style-aware 3D character facial animation suffers from concurrent geometric and perceptual distortions. Method: We propose a novel performance-driven framework integrating traditional blendshapes with a multi-stage machine learning pipeline. It comprises a 3D emotion transfer network for cross-domain temporal emotion modeling and a blendshape adaptation network ensuring geometric consistency, supporting both offline and real-time inference. Given only 2D facial images as input, the framework outputs high-fidelity, stable, and controllable expression parameters. Contribution/Results: This is the first work to unify emotion sequence modeling and geometric stability constraints within an end-to-end learnable framework. Experiments demonstrate statistically significant improvements over Faceware (p < 0.01) in expression recognition accuracy, intensity fidelity, and subjective aesthetic appeal. The method integrates seamlessly into industrial animation pipelines, enhancing animators' expressive precision and production efficiency.
Abstract
Our purpose is to improve performance-based animation so that it can drive believable, perceptually convincing stylized 3D characters. By combining traditional blendshape animation techniques with multiple machine learning models, we present both non-real-time and real-time solutions that drive character expressions in a geometrically consistent and perceptually valid way. For the non-real-time system, we propose a 3D emotion transfer network that uses a 2D human facial image to generate stylized 3D rig parameters. For the real-time system, we propose a blendshape adaptation network that generates character rig parameter motions with geometric consistency and temporal stability. We demonstrate the effectiveness of our system by comparing it to the commercial product Faceware. Results reveal that ratings of the recognition, intensity, and attractiveness of expressions depicted by animated characters driven by our systems are statistically significantly higher than those for Faceware. Our method can be integrated into the animation pipeline, providing animators with a system for creating the expressions they want more quickly and accurately.
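To make the two-stage pipeline concrete, the sketch below shows its overall data flow: 2D image features pass through an emotion transfer stage, and the resulting emotion code drives a blendshape adaptation stage whose output is smoothed over time for temporal stability. This is a minimal illustration, not the paper's implementation: the class names, the feature and rig dimensions, the random linear layers, and the exponential smoothing are all placeholder assumptions.

```python
import numpy as np

# Hypothetical dimensions, chosen only for illustration.
IMG_FEATS = 128    # features extracted from a 2D face image
EMOTION_DIM = 16   # latent emotion code
RIG_DIM = 52       # blendshape rig parameters (e.g. an ARKit-style rig)

rng = np.random.default_rng(0)

class EmotionTransferNet:
    """Stand-in for the non-real-time 3D emotion transfer network:
    maps 2D image features to a latent emotion code."""
    def __init__(self):
        self.w = rng.normal(size=(IMG_FEATS, EMOTION_DIM)) * 0.1

    def __call__(self, img_feats):
        return np.tanh(img_feats @ self.w)

class BlendshapeAdaptationNet:
    """Stand-in for the real-time blendshape adaptation network:
    maps the emotion code to rig parameters, exponentially smoothed
    across frames as a proxy for temporal stability."""
    def __init__(self, alpha=0.8):
        self.w = rng.normal(size=(EMOTION_DIM, RIG_DIM)) * 0.1
        self.alpha = alpha               # smoothing factor in [0, 1]
        self.prev = np.zeros(RIG_DIM)    # previous frame's rig weights

    def __call__(self, emotion):
        raw = 1.0 / (1.0 + np.exp(-(emotion @ self.w)))  # rig weights in (0, 1)
        self.prev = self.alpha * self.prev + (1.0 - self.alpha) * raw
        return self.prev

# Drive a short sequence of frames through the two-stage pipeline.
emo_net, rig_net = EmotionTransferNet(), BlendshapeAdaptationNet()
frames = [rng.normal(size=IMG_FEATS) for _ in range(5)]
rig_sequence = [rig_net(emo_net(f)) for f in frames]
print(len(rig_sequence), rig_sequence[0].shape)  # 5 frames of 52 rig weights
```

The smoothing step is one simple way to obtain the temporally stable rig motions the abstract describes; the actual networks are learned end to end rather than fixed linear maps.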