🤖 AI Summary
This work addresses the challenge of high-fidelity generation of anatomical structures, which exhibit complex geometries and diverse topologies. To this end, we propose an implicit diffusion-based generative framework that incorporates skeletal priors. Our method introduces differentiable skeletonization into medical shape generation for the first time, leveraging a differentiable skeleton module to extract global topological structure and fuse it with local surface features. Diffusion modeling is performed in the signed distance function (SDF) implicit space, followed by a neural implicit decoder to produce high-quality shapes. We contribute MedSDF, a large-scale, multi-category medical shape dataset, and demonstrate superior generation and reconstruction performance over existing methods on both MedSDF and a vascular dataset, while maintaining higher computational efficiency.
📝 Abstract
Anatomical shape modeling is a fundamental problem in medical data analysis. However, the geometric complexity and topological variability of anatomical structures pose significant challenges to accurate anatomical shape generation. In this work, we propose a skeletal latent diffusion framework that explicitly incorporates structural priors for efficient and high-fidelity medical shape generation. We introduce a shape auto-encoder in which the encoder captures global geometric information through a differentiable skeletonization module and aggregates local surface features into shape latents, while the decoder predicts the corresponding implicit fields over sparsely sampled coordinates. New shapes are generated via a latent-space diffusion model, followed by neural implicit decoding and mesh extraction. To address the limited availability of medical shape data, we construct a large-scale dataset, *MedSDF*, comprising surface point clouds and corresponding signed distance fields across multiple anatomical categories. Extensive experiments on MedSDF and vessel datasets demonstrate that the proposed method achieves superior reconstruction and generation quality with higher computational efficiency than existing approaches. Code is available at: https://github.com/wlsdzyzl/meshage.
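As background for the signed distance fields mentioned above: an SDF assigns each 3-D point its distance to the shape surface, negative inside and positive outside, so the surface is the zero level set. A minimal NumPy sketch for an analytic sphere (a toy stand-in for the learned anatomical shapes; the grid resolution and radius are illustrative choices, not values from the paper):

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, zero on the
    surface, positive outside -- the standard SDF convention."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Sample the SDF on a small 3-D grid, analogous to pairing surface
# point clouds with dense signed distance samples for training.
axes = [np.linspace(-1.0, 1.0, 32)] * 3
grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)  # (32,32,32,3)
sdf = sphere_sdf(grid, center=np.zeros(3), radius=0.5)       # (32,32,32)

# The zero crossing of `sdf` locates the surface; in practice a mesh
# is then extracted from this level set (e.g. via marching cubes).
inside = sdf < 0
print(inside.sum(), sdf.min(), sdf.max())
```

In a learned setting, the analytic `sphere_sdf` is replaced by a neural implicit decoder queried at sparsely sampled coordinates, which is what allows mesh extraction at arbitrary resolution after latent diffusion.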