Benchmark of Segmentation Techniques for Pelvic Fracture in CT and X-ray: Summary of the PENGWIN 2024 Challenge

📅 2025-04-03
🤖 AI Summary
Pelvic fracture fragment segmentation in CT/X-ray images remains challenging due to anatomical complexity and imaging artifacts. Method: We constructed a multi-center CT dataset augmented with synthetic X-ray images generated via DeepDRR, and organized a benchmark challenge involving 16 international teams. Contribution/Results: This study is the first to systematically characterize semantic ambiguity in fracture fragment definitions across modalities. We propose a comprehensive evaluation framework centered on fragment-level IoU, complemented by Dice score and clinically interpretable metrics; we further demonstrate that instance representation strategies—e.g., boundary-core separation—significantly impact performance. State-of-the-art CT segmentation achieves an IoU of 0.930, while X-ray segmentation reaches 0.774. Our findings underscore the necessity of interactive human-in-the-loop segmentation, providing both methodological foundations and practical pathways for intelligent orthopedic image diagnosis and surgical planning.

📝 Abstract
The segmentation of pelvic fracture fragments in CT and X-ray images is crucial for trauma diagnosis, surgical planning, and intraoperative guidance. However, accurately and efficiently delineating the bone fragments remains a significant challenge due to complex anatomy and imaging limitations. The PENGWIN challenge, organized as a MICCAI 2024 satellite event, aimed to advance automated fracture segmentation by benchmarking state-of-the-art algorithms on these complex tasks. A diverse dataset of 150 CT scans was collected from multiple clinical centers, and a large set of simulated X-ray images was generated using the DeepDRR method. Final submissions from 16 teams worldwide were evaluated under a rigorous multi-metric testing scheme. The top-performing CT algorithm achieved an average fragment-wise intersection over union (IoU) of 0.930, demonstrating satisfactory accuracy. However, in the X-ray task, the best algorithm attained an IoU of 0.774, highlighting the greater challenges posed by overlapping anatomical structures. Beyond the quantitative evaluation, the challenge revealed methodological diversity in algorithm design. Variations in instance representation, such as primary-secondary classification versus boundary-core separation, led to differing segmentation strategies. Despite promising results, the challenge also exposed inherent uncertainties in fragment definition, particularly in cases of incomplete fractures. These findings suggest that interactive segmentation approaches, integrating human decision-making with task-relevant information, may be essential for improving model reliability and clinical applicability.
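The fragment-wise IoU used to rank submissions can be illustrated with a minimal sketch: each ground-truth fragment is matched to the predicted fragment that overlaps it best, and the per-fragment IoUs are averaged. This is a hypothetical simplification for illustration; the challenge's official matching protocol (e.g., Hungarian assignment versus greedy matching) may differ.

```python
import numpy as np

def fragment_wise_iou(gt_labels, pred_labels):
    """Average IoU over ground-truth fragments, each matched greedily
    to the best-overlapping predicted fragment.

    gt_labels, pred_labels: integer label maps (0 = background), where
    each positive integer identifies one bone-fragment instance.
    """
    gt_ids = [i for i in np.unique(gt_labels) if i != 0]
    pred_ids = [i for i in np.unique(pred_labels) if i != 0]
    ious = []
    for g in gt_ids:
        gt_mask = gt_labels == g
        best = 0.0
        for p in pred_ids:
            pred_mask = pred_labels == p
            inter = np.logical_and(gt_mask, pred_mask).sum()
            union = np.logical_or(gt_mask, pred_mask).sum()
            if union:
                best = max(best, inter / union)
        ious.append(best)
    return float(np.mean(ious)) if ious else 1.0

# Example: two ground-truth fragments; one predicted perfectly, one half-covered.
gt = np.array([[1, 1, 0],
               [0, 2, 2]])
pred = np.array([[1, 1, 0],
                 [0, 0, 3]])
print(fragment_wise_iou(gt, pred))  # (1.0 + 0.5) / 2 = 0.75
```

Note that instance labels need not agree between prediction and ground truth (the prediction uses label 3 where the reference uses 2); only the spatial overlap matters, which is what makes this metric suitable for fragments with ambiguous identities.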
Problem

Research questions and friction points this paper is trying to address.

Benchmarking segmentation techniques for pelvic fractures in CT and X-ray images
Addressing challenges in delineating bone fragments due to complex anatomy
Evaluating algorithm performance and uncertainties in fracture fragment definition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmarked segmentation techniques for pelvic fractures
Used DeepDRR for simulated X-ray image generation
Evaluated algorithms with multi-metric testing scheme
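The principle behind DRR-based X-ray simulation can be sketched in a few lines: attenuation is integrated along rays through a CT volume and converted to transmission via Beer-Lambert. This toy version uses parallel rays and a single material; DeepDRR itself models realistic cone-beam geometry, energy spectra, and scatter, so treat this purely as a conceptual illustration, not the challenge's pipeline.

```python
import numpy as np

def toy_drr(volume, axis=0):
    """Toy digitally reconstructed radiograph: sum attenuation along
    parallel rays (one volume axis), then apply Beer-Lambert attenuation
    and invert so dense structures (bone) appear bright."""
    line_integrals = volume.sum(axis=axis)       # ray integrals of attenuation
    transmission = np.exp(-line_integrals)       # Beer-Lambert law
    return 1.0 - transmission                    # invert: bone bright, air dark

# A synthetic 3-D "CT" with a dense block standing in for bone.
ct = np.zeros((16, 16, 16))
ct[4:12, 4:12, 4:12] = 0.2
drr = toy_drr(ct, axis=0)
print(drr.shape)  # (16, 16)
```

The overlap challenge noted in the abstract falls out of this model directly: every structure along a ray contributes to the same pixel, so fragments that are trivially separable in 3-D CT become superimposed in the 2-D projection.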
Authors

Yudi Sang
Beijing Rossum Robot Technology Co., Ltd., Beijing, China
Yanzhen Liu
Key Laboratory of Biomechanics and Mechanobiology, Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
Sutuke Yibulayimu
Key Laboratory of Biomechanics and Mechanobiology, Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
Yunning Wang
Key Laboratory of Biomechanics and Mechanobiology, Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
Benjamin Killeen
Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
Mingxu Liu
Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
Ping-Cheng Ku
Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
Ole Johannsen
Machine Learning Engineer, DKFZ, Heidelberg
Karol Gotkowski
Deutsches Krebsforschungszentrum
Maximilian Zenk
DKFZ Heidelberg
Klaus Maier-Hein
Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
Fabian Isensee
HIP Applied Computer Vision Lab, Division of Medical Image Computing, German Cancer Research Center
Peiyan Yue
Shenzhen University
Yi Wang
Smart Medical Imaging, Learning and Engineering (SMILE) Lab, Medical UltraSound Image Computing (MUSIC) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
Haidong Yu
Smart Medical Imaging, Learning and Engineering (SMILE) Lab, Medical UltraSound Image Computing (MUSIC) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
Zhaohong Pan
Yutong He
Xiaokun Liang
Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
Daiqi Liu
Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
Fuxin Fan
Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
Artur Jurgas
Department of Measurement and Electronics, AGH University of Krakow, Krakow, Poland; MedApp S.A., Krakow, Poland
Andrzej Skalski
Department of Measurement and Electronics, AGH University of Science and Technology, MedApp S.A.
Yuxi Ma
Institute for Artificial Intelligence, Peking University
Jing Yang
National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China
Szymon Płotka
Jagiellonian University
Rafał Litka
Gang Zhu
Beijing Rossum Robot Technology Co., Ltd., Beijing, China
Yingchun Song
Beijing Rossum Robot Technology Co., Ltd., Beijing, China
Mathias Unberath
Johns Hopkins University
Mehran Armand
Professor, Mechanical Engineering, I3R, University of Arkansas
Dan Ruan
S. Kevin Zhou
Qiyong Cao
Chunpeng Zhao
Xinbao Wu
Yu Wang
Key Laboratory of Biomechanics and Mechanobiology, Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China