Learning Generalizable Features for Tibial Plateau Fracture Segmentation Using Masked Autoencoder and Limited Annotations

📅 2025-02-05
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the challenges of scarce annotated data, highly variable fracture morphology, and poor cross-dataset generalizability in tibial plateau fracture (TPF) segmentation from CT images, this work applies masked autoencoder (MAE) pretraining to TPF segmentation. The method uses an unsupervised MAE pretraining stage to jointly model global bone anatomy and fine-grained fracture characteristics, followed by few-shot fine-tuning of a 3D U-Net with a Dice loss. With only 20 annotated cases, the approach achieves a 95.81% Dice similarity coefficient, 1.91 mm average symmetric surface distance (ASSD), and 9.42 mm 95th-percentile Hausdorff distance (95HD) on an internal test set of 180 cases. Moreover, it demonstrates strong transferability on a public hip fracture dataset. This work enhances the robustness and generalizability of fracture segmentation under few-shot learning conditions.
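The fine-tuning stage optimizes a Dice objective against the segmentation labels. A minimal NumPy sketch of a soft Dice loss (an illustration of the standard formulation, not the authors' implementation) could look like:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary segmentation.

    pred: predicted foreground probabilities in [0, 1]
    target: binary ground-truth mask of the same shape
    Returns 1 - DSC, so lower is better.
    """
    pred = pred.ravel().astype(np.float64)
    target = target.ravel().astype(np.float64)
    intersection = np.sum(pred * target)
    # eps stabilizes the ratio when both masks are empty.
    dice = (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
    return 1.0 - dice
```

A perfect prediction yields a loss near 0, while a prediction disjoint from the ground truth yields a loss near 1.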

📝 Abstract
Accurate automated segmentation of tibial plateau fractures (TPF) from computed tomography (CT) requires large amounts of annotated data to train deep learning models, but obtaining such annotations presents unique challenges. The process demands expert knowledge to identify diverse fracture patterns, assess severity, and account for individual anatomical variations, making the annotation process highly time-consuming and expensive. Although semi-supervised learning methods can utilize unlabeled data, existing approaches often struggle with the complexity and variability of fracture morphologies, as well as limited generalizability across datasets. To tackle these issues, we propose an effective training strategy based on the masked autoencoder (MAE) for accurate TPF segmentation in CT. Our method leverages MAE pretraining to capture global skeletal structures and fine-grained fracture details from unlabeled data, followed by fine-tuning with a small set of labeled data. This strategy reduces the dependence on extensive annotations while enhancing the model's ability to learn generalizable and transferable features. The proposed method is evaluated on an in-house dataset containing 180 CT scans with TPF. Experimental results demonstrate that our method consistently outperforms semi-supervised methods, achieving an average Dice similarity coefficient (DSC) of 95.81%, average symmetric surface distance (ASSD) of 1.91 mm, and 95th-percentile Hausdorff distance (95HD) of 9.42 mm with only 20 annotated cases. Moreover, our method exhibits strong transferability when applied to another public pelvic CT dataset with hip fractures, highlighting its potential for broader applications in fracture segmentation tasks.
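The pretraining stage described above follows the MAE recipe: the volume is split into patches, a large random subset of patches is masked, and the model is trained to reconstruct only those masked patches. A minimal NumPy sketch of this objective (the patch size, mask ratio, and function names are illustrative assumptions, not the paper's implementation) could look like:

```python
import numpy as np

def patchify(volume, patch=8):
    """Split a cubic CT volume into non-overlapping cubic patches."""
    d, h, w = volume.shape
    assert d % patch == 0 and h % patch == 0 and w % patch == 0
    v = volume.reshape(d // patch, patch, h // patch, patch, w // patch, patch)
    v = v.transpose(0, 2, 4, 1, 3, 5)
    return v.reshape(-1, patch ** 3)  # (num_patches, voxels_per_patch)

def mae_masked_loss(volume, reconstruction, mask_ratio=0.75, seed=0):
    """MAE-style objective: score the reconstruction on masked patches only.

    `reconstruction` stands in for the decoder output; in real training
    the encoder sees only the visible (unmasked) patches.
    """
    patches = patchify(volume)
    rec_patches = patchify(reconstruction)
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    n_masked = int(round(mask_ratio * n))
    masked_idx = rng.choice(n, size=n_masked, replace=False)
    # Mean squared reconstruction error restricted to the masked subset.
    return float(np.mean((patches[masked_idx] - rec_patches[masked_idx]) ** 2))
```

Restricting the loss to masked patches forces the encoder to infer bone structure from partial context, which is the property the paper exploits to learn transferable fracture features from unlabeled CT.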
Problem

Research questions and friction points this paper is trying to address.

Reduces need for extensive annotated data
Improves generalizability in fracture segmentation
Enhances transferability across different datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Masked autoencoder for feature learning
Reduces need for extensive annotations
Enhances model generalizability and transferability
Peiyan Yue
Shenzhen University
Die Cai
Department of Radiology, The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen Second People's Hospital, Shenzhen, 518035, China
Chu Guo
Wuhan Mindray Scientific Co., Ltd, Wuhan, 430010, China
Mengxing Liu
Shenzhen Mindray Bio-Medical Electronics Co., Ltd, Shenzhen, 518132, China; Wuhan Mindray Scientific Co., Ltd, Wuhan, 430010, China
Jun Xia
Department of Radiology, The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen Second People's Hospital, Shenzhen, 518035, China
Yi Wang
Smart Medical Imaging, Learning and Engineering (SMILE) Lab, Medical UltraSound Image Computing (MUSIC) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China