🤖 AI Summary
This work addresses the inefficiencies in whole-slide image (WSI) preprocessing, where existing tissue detection methods either rely on inaccurate heuristic thresholds or incur substantial computational overhead, thereby hindering AI-driven pathology workflows. To overcome these limitations, we propose AtlasPatch—an efficient and scalable WSI preprocessing framework that, for the first time, leverages a fine-tuned Segment Anything Model to achieve precise tissue segmentation on large-scale, heterogeneous WSI thumbnails. AtlasPatch combines an efficient mask extrapolation strategy with CPU/GPU parallelization to enable high-throughput patch extraction and streaming processing. Trained on approximately 30,000 semi-manually annotated WSI thumbnails, AtlasPatch achieves state-of-the-art tissue segmentation accuracy and downstream multiple-instance learning performance at a significantly reduced computational cost. The code is publicly released.
📝 Abstract
Whole-slide image (WSI) preprocessing, typically comprising tissue detection followed by patch extraction, is foundational to AI-driven computational pathology workflows. This step remains a major computational bottleneck: existing tools either rely on inaccurate heuristic thresholding for tissue detection, or adopt AI-based approaches trained on limited-diversity data that operate at the patch level, incurring substantial computational complexity. We present AtlasPatch, an efficient and scalable slide preprocessing framework for accurate tissue detection and high-throughput patch extraction with minimal computational overhead. AtlasPatch's tissue detection module is trained on a heterogeneous, semi-manually annotated dataset of ~30,000 WSI thumbnails via efficient fine-tuning of the Segment Anything Model. The tool extrapolates tissue masks from thumbnails to full-resolution slides to extract patch coordinates at user-specified magnifications, with options to stream patches directly into common image encoders for embedding or to store patch images, all efficiently parallelized across CPUs and GPUs. We evaluate AtlasPatch on segmentation accuracy, computational cost, and downstream multiple-instance learning, where it matches state-of-the-art performance at a fraction of the computational cost of existing tools. AtlasPatch is open-source and available at https://github.com/AtlasAnalyticsLab/AtlasPatch.
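To make the thumbnail-to-slide extrapolation step concrete, here is a minimal illustrative sketch (not the AtlasPatch implementation; the function name, parameters, and tissue-fraction threshold are all hypothetical) of how a binary tissue mask predicted on a low-resolution thumbnail can be mapped to full-resolution patch coordinates, keeping only patches whose footprint is mostly tissue:

```python
import numpy as np

def mask_to_patch_coords(mask, thumb_downsample, patch_size, min_tissue_frac=0.5):
    """Hypothetical sketch: map a binary thumbnail tissue mask to
    top-left patch coordinates in full-resolution slide space.

    mask            : 2D binary array predicted on the thumbnail
    thumb_downsample: slide pixels per thumbnail pixel
    patch_size      : patch edge length in full-resolution pixels
    min_tissue_frac : minimum fraction of tissue inside a patch footprint
    """
    # Each full-resolution patch covers `step` thumbnail pixels per side.
    step = patch_size / thumb_downsample
    n_rows = int(mask.shape[0] // step)
    n_cols = int(mask.shape[1] // step)
    coords = []
    for r in range(n_rows):
        for c in range(n_cols):
            y0, y1 = int(r * step), int((r + 1) * step)
            x0, x1 = int(c * step), int((c + 1) * step)
            # Keep the patch if enough of its thumbnail footprint is tissue.
            if mask[y0:y1, x0:x1].mean() >= min_tissue_frac:
                coords.append((c * patch_size, r * patch_size))
    return coords
```

In a real pipeline these coordinates would then be read from the slide at the requested magnification and either streamed to an image encoder or written to disk, in parallel across workers.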