Deep‐Learning‐Based Facial Retargeting Using Local Patches

📅 2024-10-25
🏛️ Computer Graphics Forum (Print)
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses the challenge of semantic distortion that arises when transferring real human facial expressions to stylized 3D characters with significantly different facial proportions and topologies. To mitigate this issue, the authors propose an end-to-end, patch-based facial retargeting method that automatically extracts local expressive regions from source videos and integrates them with the target character’s structural and motion constraints to generate semantically consistent animation parameters. The approach innovatively employs a local patch mechanism, incorporating modules for patch extraction, reenactment, and adaptive weight estimation, thereby effectively preserving the original expression semantics. Experimental results demonstrate that the method produces natural, temporally coherent, and semantically accurate facial animations even on highly exaggerated or non-humanoid 3D characters.

📝 Abstract
In the era of digital animation, the quest to produce lifelike facial animations for virtual characters has led to the development of various retargeting methods. While retargeting facial motion between models of similar shapes has been very successful, challenges arise when the retargeting is performed on stylized or exaggerated 3D characters that deviate significantly from human facial structures. In this scenario, it is important to consider the target character's facial structure and possible range of motion to preserve the semantics of the original facial motions after retargeting. To achieve this, we propose a local patch-based retargeting method that transfers facial animations captured in a source performance video to a target stylized 3D character. Our method consists of three modules. The Automatic Patch Extraction Module extracts local patches from each source video frame. These patches are processed by the Reenactment Module to generate correspondingly re-enacted target local patches. The Weight Estimation Module then calculates the animation parameters for the target character at every frame, producing a complete facial animation sequence. Extensive experiments demonstrate that our method can successfully transfer the semantic meaning of source facial expressions to stylized characters with considerable variations in facial feature proportion.
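The three-module flow described in the abstract can be sketched in code. This is a minimal illustrative skeleton, not the paper's model: only the module names (patch extraction, reenactment, weight estimation) come from the abstract, while the data layout, the placeholder reenactment step, and the toy weight computation are all invented assumptions.

```python
from typing import Dict, List, Tuple

# Region name -> flattened pixel/feature values for that local patch.
Patch = Dict[str, List[float]]

def extract_patches(frame: List[float],
                    regions: Dict[str, Tuple[int, int]]) -> Patch:
    """Automatic Patch Extraction Module (sketch): slice local expressive
    regions (e.g. eyes, mouth) out of a flattened source frame."""
    return {name: frame[lo:hi] for name, (lo, hi) in regions.items()}

def reenact(patch: List[float], gain: float = 1.0) -> List[float]:
    """Reenactment Module stand-in: in the paper this is a learned network
    that re-enacts the patch for the target character; here we just scale."""
    return [gain * v for v in patch]

def estimate_weights(patches: Patch) -> Dict[str, float]:
    """Weight Estimation Module stand-in: map each re-enacted patch to one
    scalar animation parameter (e.g. a blendshape weight) in [0, 1]."""
    weights = {}
    for name, values in patches.items():
        mean = sum(values) / len(values)
        weights[name] = min(1.0, max(0.0, mean))
    return weights

def retarget_frame(frame: List[float],
                   regions: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """Run one frame through all three modules in order."""
    local = extract_patches(frame, regions)
    reenacted = {name: reenact(p) for name, p in local.items()}
    return estimate_weights(reenacted)

# Toy frame of 8 "pixels"; the mouth region is strongly activated.
frame = [0.0, 0.1, 0.0, 0.1, 0.9, 0.8, 0.9, 0.7]
regions = {"left_eye": (0, 2), "right_eye": (2, 4), "mouth": (4, 8)}
print(retarget_frame(frame, regions))
```

Running this per frame over a whole video yields a parameter curve per region, which is the shape of output the paper's Weight Estimation Module produces for a complete animation sequence.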
Problem

Research questions and friction points this paper is trying to address.

facial retargeting
stylized characters
facial animation
semantic preservation
3D character
Innovation

Methods, ideas, or system contributions that make the work stand out.

facial retargeting
local patches
deep learning
stylized characters
semantic preservation
Yeonsoo Choi
Netmarble F&C, Republic of Korea
Inyup Lee
Visual Media Lab, KAIST, Republic of Korea
Sihun Cha
KAIST (Computer Vision, Computer Graphics, Facial Animation)
Seonghyeon Kim
Ph.D. Student at KAIST, Visual Media Lab (Computer Graphics)
Sunjin Jung
Visual Media Lab, KAIST, Republic of Korea
Jun-yong Noh
Visual Media Lab, KAIST, Republic of Korea