🤖 AI Summary
Problem: Limited availability of paired 3D face–skull data, low registration accuracy, and insufficient representation of craniofacial anomaly patients hinder craniofacial modeling and surgical planning.
Method: We introduce the first bidirectional, deformable 3D face–skull model, leveraging a novel dense ray-matching registration technique to achieve high-accuracy, topology-preserving alignment of CT-derived skull and facial surfaces. Using over 200 high-quality, expert-validated face–skull pairs, we construct a shared principal component space enabling robust bidirectional shape inference (skull→face and face→skull). We further incorporate soft-tissue thickness variation—previously unmodeled—to enhance morphological diversity in face generation from a single skull.
Contribution/Results: The model demonstrates strong robustness and precision in single-image-driven 3D reconstruction and surgical outcome prediction. Open-sourced, it establishes a new paradigm for craniofacial anomaly analysis, personalized surgical planning, and medical education.
📝 Abstract
Building a joint face-skull morphable model holds great potential for applications such as remote diagnostics, surgical planning, medical education, and physically based facial simulation. However, realizing this vision is constrained by the scarcity of paired face-skull data, insufficient registration accuracy, and the limited exploration of reconstruction and clinical applications. Moreover, individuals with craniofacial deformities are often overlooked, resulting in underrepresentation and limited inclusivity. To address these challenges, we first construct a dataset of over 200 samples covering both normal cases and rare craniofacial conditions; each case contains a CT-based skull, a CT-based face, and a high-fidelity textured face scan. Second, we propose a novel dense ray-matching registration method that ensures topological consistency across the face, the skull, and their tissue correspondences. Building on this, we introduce the 3D Bidirectional Face-Skull Morphable Model (BFSM), which enables shape inference between face and skull through a shared coefficient space, while also modeling tissue-thickness variation to support one-to-many facial reconstruction from the same skull, reflecting individual changes such as gain or loss of facial fat over time. Finally, we demonstrate the potential of BFSM in medical applications, including 3D face-skull reconstruction from a single image and surgical planning prediction. Extensive experiments confirm the robustness and accuracy of our method. BFSM is available at https://github.com/wang-zidu/BFSM
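The shared-coefficient-space idea can be illustrated with a minimal joint-PCA sketch: concatenate each registered face-skull pair into one shape vector, build a common basis, then infer the shared coefficients from the skull block alone and decode the face block. Everything below (array sizes, variable names, the `skull_to_face` helper, and the synthetic random data standing in for registered meshes) is a hypothetical illustration of the general technique, not the paper's actual pipeline or dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for registered, topology-consistent shape vectors
# (hypothetical sizes; real meshes would be flattened vertex coordinates).
n_samples, n_face, n_skull = 40, 300, 200
faces = rng.normal(size=(n_samples, n_face))
skulls = rng.normal(size=(n_samples, n_skull))

# Joint PCA: each training sample is one concatenated face-skull vector.
X = np.hstack([faces, skulls])
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 10
B = Vt[:k]                                  # shared basis (k components)
B_face, B_skull = B[:, :n_face], B[:, n_face:]
mu_face, mu_skull = mean[:n_face], mean[n_face:]

def skull_to_face(skull):
    """Fit shared coefficients from the skull block, decode the face block."""
    coeffs, *_ = np.linalg.lstsq(B_skull.T, skull - mu_skull, rcond=None)
    return mu_face + coeffs @ B_face

face_hat = skull_to_face(skulls[0])          # inferred face shape vector
```

The reverse direction (face→skull) is symmetric: fit coefficients against `B_face` and decode with `B_skull`. In the actual model, an extra tissue-thickness term would perturb the decoded face so one skull maps to many plausible faces.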