Tumor-anchored deep feature random forests for out-of-distribution detection in lung cancer segmentation

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Lung cancer segmentation models often produce high-confidence but erroneous predictions on out-of-distribution (OOD) inputs, posing safety risks for clinical deployment. Existing logit-based methods suffer from task-specific bias, while architecture-augmentation approaches incur substantial computational overhead. To address both limitations, we propose a lightweight, plug-and-play OOD detection framework that requires no modification to the backbone network: it extracts multi-scale deep features from tumor-anchored regions of interest, fuses them with hierarchical representations from a self-supervised pretrained encoder, and classifies inputs as OOD with a random forest. The framework is robust across diverse segmentation architectures and pretraining strategies, achieving AUROC above 93.50% on near-OOD and above 99.00% on far-OOD benchmarks, substantially surpassing both logit-based baselines and radiomics-based approaches.

📝 Abstract
Accurate segmentation of cancerous lesions from 3D computed tomography (CT) scans is essential for automated treatment planning and response assessment. However, even state-of-the-art models combining self-supervised learning (SSL) pretrained transformers with convolutional decoders are susceptible to out-of-distribution (OOD) inputs, generating confidently incorrect tumor segmentations that pose risks for safe clinical deployment. Existing logit-based methods suffer from task-specific model biases, while architectural enhancements that explicitly detect OOD inputs increase parameter counts and computational costs. We therefore introduce RF-Deep, a lightweight, plug-and-play, post-hoc random-forest-based OOD detection framework that leverages deep features with limited outlier exposure. RF-Deep enhances generalization to imaging variations by repurposing the hierarchical features of the pretrained-then-finetuned backbone encoder, and provides task-relevant OOD detection by extracting those features from multiple regions of interest anchored to the predicted tumor segmentations; as a result, it scales to images with varying fields of view. We compared RF-Deep against existing OOD detection methods on 1,916 CT scans spanning near-OOD (pulmonary embolism, negative COVID-19) and far-OOD (kidney cancer, healthy pancreas) datasets. RF-Deep achieved AUROC > 93.50 on the challenging near-OOD datasets and near-perfect detection (AUROC > 99.00) on the far-OOD datasets, substantially outperforming logit-based and radiomics approaches. RF-Deep maintained consistent performance across networks of different depths and pretraining strategies, demonstrating its effectiveness as a lightweight, architecture-agnostic approach for enhancing the reliability of tumor segmentation from CT volumes.
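The AUROC figures reported above are a standard threshold-free measure of how well an OOD score separates in-distribution from OOD scans. A minimal sketch of that evaluation, using scikit-learn and purely synthetic scores (the labels and score distributions below are illustrative assumptions, not the paper's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical per-scan OOD probabilities: label 0 = in-distribution, 1 = OOD.
labels = np.array([0] * 50 + [1] * 50)
scores = np.concatenate([rng.beta(2, 5, 50),    # in-distribution cases: mostly low scores
                         rng.beta(5, 2, 50)])   # OOD cases: mostly high scores

# AUROC = probability a random OOD case scores higher than a random in-dist case.
auroc = roc_auc_score(labels, scores)
```

A detector that assigned scores at random would sit near AUROC 0.50; the paper's reported values (> 0.935 near-OOD, > 0.99 far-OOD) indicate near-complete separation of the two score distributions.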
Problem

Research questions and friction points this paper is trying to address.

Detects out-of-distribution inputs in lung cancer segmentation
Improves reliability of tumor segmentation from CT scans
Provides lightweight, architecture-agnostic OOD detection framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plug-and-play random forests for OOD detection
Leverages deep features from pretrained encoder
Anchors detection to predicted tumor regions
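The pipeline implied by these points (pool multi-scale encoder features over tumor-anchored regions, then classify with a random forest under limited outlier exposure) can be sketched as follows. All shapes, helper names, and the toy feature data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pool_roi_features(feature_map, mask):
    """Mean-pool a C x D x H x W feature map over a tumor-anchored ROI mask."""
    c = feature_map.shape[0]
    flat = feature_map.reshape(c, -1)
    roi = mask.reshape(-1).astype(bool)
    if not roi.any():               # no predicted tumor: fall back to global pooling
        roi = np.ones_like(roi)
    return flat[:, roi].mean(axis=1)   # one C-dim vector per scale

def build_feature_vector(multi_scale_maps, masks):
    """Concatenate ROI-pooled features from several encoder scales."""
    return np.concatenate([pool_roi_features(f, m)
                           for f, m in zip(multi_scale_maps, masks)])

rng = np.random.default_rng(0)

def toy_case(shift=0.0):
    """Toy stand-in for encoder features at two scales (C=8 and C=16 channels)."""
    maps = [rng.normal(shift, 1.0, size=(8, 4, 4, 4)),
            rng.normal(shift, 1.0, size=(16, 2, 2, 2))]
    masks = [rng.random(size=(4, 4, 4)) > 0.5,
             rng.random(size=(2, 2, 2)) > 0.5]
    return build_feature_vector(maps, masks)

# Limited outlier exposure: many in-distribution (0) and a few OOD (1) cases.
X_train = np.stack([toy_case(0.0) for _ in range(40)] +
                   [toy_case(3.0) for _ in range(10)])
y_train = np.array([0] * 40 + [1] * 10)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
ood_score = rf.predict_proba(toy_case(3.0)[None])[0, 1]   # probability of OOD
```

Because the classifier sits entirely on top of frozen, already-computed encoder features, no backbone retraining or architectural change is needed, which matches the "plug-and-play" framing above; the ROI pooling step is what ties the detection to the predicted tumor rather than the whole volume.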