OCELOT 2023: Cell Detection from Cell-Tissue Interaction Challenge

📅 2025-09-11
🤖 AI Summary
Existing cell detection models fail to capture the multi-scale cognitive process pathologists follow (simultaneously evaluating tissue morphology and cellular detail), and the field lacks annotated data supporting cross-scale semantic learning. Method: the challenge introduces the first multi-organ whole-slide image dataset featuring overlapping cell- and tissue-level annotations across multiple scales; proposes a cell-tissue joint modeling paradigm, empirically validating the critical role of their interaction in detection performance; and highlights multi-scale feature fusion with cross-hierarchical interactive attention to explicitly model biologically grounded dependencies between cells and tissues. Contribution/Results: on the test set, the best-performing challenge entries achieve up to a 7.99-point improvement in F1-score over a single-task, cell-only baseline and significantly outperform conventional methods. This work establishes a novel, interpretable, multi-scale paradigm for computational pathology analysis.
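The cross-hierarchical interactive attention mentioned above can be sketched as a minimal cross-attention step in which cell-scale feature tokens (queries) attend to tissue-scale tokens (keys/values) and the result is fused back residually. Everything here (function names, random projection weights, token counts) is a hypothetical illustration under stated assumptions, not the architecture of any specific challenge entry:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_scale_attention(cell_feats, tissue_feats, d_k=32, seed=0):
    """Cell-scale tokens (queries) attend to tissue-scale tokens (keys/values).

    cell_feats: (n_cell, d) features from the high-magnification branch.
    tissue_feats: (n_tissue, d) features from the low-magnification branch.
    Returns cell features enriched with tissue context via a residual sum.
    """
    rng = np.random.default_rng(seed)
    d = cell_feats.shape[1]
    # Hypothetical fixed random projections; a real model learns these.
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)
    Q, K, V = cell_feats @ Wq, tissue_feats @ Wk, tissue_feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (n_cell, n_tissue) weights
    return cell_feats + attn @ V            # residual cell-tissue fusion

# Toy inputs: 16 cell-scale tokens and 4 tissue-scale tokens, 64-dim each.
cell = np.random.default_rng(1).standard_normal((16, 64))
tissue = np.random.default_rng(2).standard_normal((4, 64))
fused = cross_scale_attention(cell, tissue)
print(fused.shape)  # same shape as the cell branch: (16, 64)
```

The residual form means the cell branch degrades gracefully to a cell-only model when the attention contributes little, which mirrors the challenge's comparison against a cell-only baseline.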

📝 Abstract
Pathologists routinely alternate between different magnifications when examining Whole-Slide Images, allowing them to evaluate both broad tissue morphology and intricate cellular details to form comprehensive diagnoses. However, existing deep learning-based cell detection models struggle to replicate this behavior and to learn the interdependent semantics between structures at different magnifications. A key barrier in the field is the lack of datasets with multi-scale overlapping cell and tissue annotations. The OCELOT 2023 challenge was initiated to gather insights from the community, to validate the hypothesis that understanding cell-tissue interactions is crucial for achieving human-level performance, and to accelerate research in this field. The challenge dataset includes overlapping cell detection and tissue segmentation annotations from six organs, comprising 673 pairs sourced from 306 The Cancer Genome Atlas (TCGA) Whole-Slide Images with hematoxylin and eosin staining, divided into training, validation, and test subsets. Participants presented models that significantly enhanced the understanding of cell-tissue relationships. Top entries achieved up to a 7.99-point increase in F1-score on the test set compared to the baseline cell-only model that did not incorporate cell-tissue relationships. This is a substantial improvement over traditional cell-only detection methods, demonstrating the need to incorporate multi-scale semantics into these models. This paper provides a comparative analysis of the methods used by participants, highlighting innovative strategies implemented in the OCELOT 2023 challenge.
Problem

Research questions and friction points this paper is trying to address.

Existing cell detection models lack multi-scale tissue interaction understanding
No datasets with overlapping cell-tissue annotations across different magnifications
Models need to incorporate cell-tissue relationships for human-level performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-scale overlapping cell-tissue annotations
Incorporating cell-tissue interaction relationships
Fusing multi-scale semantics for detection
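Since the headline result is an F1-score gain, it may help to see how point-based cell detection is typically scored: predicted cell centers are matched one-to-one to ground-truth centers within a pixel radius, and precision, recall, and F1 are computed over the matches. The greedy matcher below is a simplified sketch; the radius value and the matching rule are assumptions for illustration, not the official OCELOT evaluation protocol:

```python
import numpy as np

def detection_f1(preds, gts, radius=15.0):
    """Greedily match predicted cell centers to ground-truth centers.

    A prediction counts as a true positive if an unmatched ground-truth
    center lies within `radius` pixels. Returns (precision, recall, f1).
    Simplified sketch; the official challenge metric may differ.
    """
    preds = np.asarray(preds, dtype=float)
    gts = np.asarray(gts, dtype=float)
    matched_gt = set()
    tp = 0
    for p in preds:
        if len(gts) == 0:
            break
        dists = np.linalg.norm(gts - p, axis=1)
        for j in np.argsort(dists):       # nearest candidates first
            if dists[j] > radius:
                break                     # all remaining are too far
            if j not in matched_gt:       # enforce one-to-one matching
                matched_gt.add(j)
                tp += 1
                break
    fp = len(preds) - tp
    fn = len(gts) - tp
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# Two correct detections plus one spurious one against two ground truths.
preds = [(10, 10), (50, 50), (200, 200)]
gts = [(12, 9), (52, 48)]
prec, rec, f1 = detection_f1(preds, gts)
```

With this toy input the matcher yields precision 2/3, recall 1.0, and F1 = 0.8, which shows how a single false positive dilutes the score even when every true cell is found.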
JaeWoong Shin
Lunit Inc., Seoul, Republic of Korea
Jeongun Ryu
Lunit Inc., Seoul, Republic of Korea
Aaron Valero Puche
Lunit Inc., Seoul, Republic of Korea
Jinhee Lee
Lunit Inc., Seoul, Republic of Korea
Biagio Brattoli
Research scientist at Lunit, previously AWS. PhD at Heidelberg University
Wonkyung Jung
Lunit Inc., Seoul, Republic of Korea
Soo Ick Cho
INSKIN LAB / GIGA Study
Kyunghyun Paeng
Lunit Inc., Seoul, Republic of Korea
Chan-Young Ock
Lunit Inc., Seoul, Republic of Korea
Donggeun Yoo
Lunit Inc.
Zhaoyang Li
Ph.D. student, University of Science and Technology of China
Wangkai Li
University of Science and Technology of China, Hefei, China
Huayu Mai
Ph.D. student, University of Science and Technology of China
Joshua Millward
School of Computing, Engineering and Mathematical Sciences, La Trobe University, Melbourne, Australia
Zhen He
School of Computing, Engineering and Mathematical Sciences, La Trobe University, Melbourne, Australia
Aiden Nibali
School of Computing, Engineering and Mathematical Sciences, La Trobe University, Melbourne, Australia
Lydia Anette Schoenpflug
Department of Pathology and Molecular Pathology, University Hospital of Zürich, University of Zürich, Zürich, Switzerland
Viktor Hendrik Koelzer
Department of Pathology and Molecular Pathology, University Hospital of Zürich, University of Zürich, Zürich, Switzerland; Institute of Medical Genetics and Pathology, University Hospital Basel, University of Basel, Basel, Switzerland; Department of Oncology, University of Oxford, Oxford, UK
Xu Shuoyu
Bio-totem Pte Ltd, Foshan, China
Ji Zheng
Bio-totem Pte Ltd, Foshan, China
Hu Bin
Bio-totem Pte Ltd, Foshan, China
Yu-Wen Lo
Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan
Ching-Hui Yang
Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan
Sérgio Pereira
Lunit Inc., Seoul, Republic of Korea