🤖 AI Summary
This study addresses a critical gap in ulcerative colitis endoscopic assessment research, which has been hindered by the absence of publicly available, multi-centre, multimodal datasets with expert annotations and reliable benchmarks. To this end, the authors construct the first multi-centre endoscopic image dataset that integrates both the Mayo Endoscopic Score (MES) and the Ulcerative Colitis Endoscopic Index of Severity (UCEIS) scoring systems, pairs images with expert-written clinical descriptions, and spans multiple image resolutions. They systematically evaluate convolutional neural networks, vision transformers, hybrid architectures, and state-of-the-art vision-language models on endoscopic score classification and image captioning. This work establishes a unified benchmark that supports both automated severity scoring and interpretable semantic description generation, advancing research on algorithmic robustness and clinical interpretability in inflammatory bowel disease assessment.
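The dual labels lend themselves to a multi-task formulation: one shared image encoder with two classification heads, one per scoring system. The following is a minimal sketch of that idea, not the paper's benchmarked pipeline; the ResNet-50 backbone, the class counts (derived from the stated score ranges MES 0-3 and UCEIS 0-8), and the unweighted joint loss are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class DualScoreClassifier(nn.Module):
    """Shared backbone with two heads: MES (4 classes) and UCEIS (9 classes)."""

    def __init__(self):
        super().__init__()
        # Placeholder backbone; any ImageNet-pretrained CNN or ViT would fit.
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()  # expose pooled features instead of logits
        self.backbone = backbone
        self.mes_head = nn.Linear(feat_dim, 4)    # MES scores 0..3
        self.uceis_head = nn.Linear(feat_dim, 9)  # UCEIS scores 0..8

    def forward(self, x):
        feats = self.backbone(x)
        return self.mes_head(feats), self.uceis_head(feats)

model = DualScoreClassifier()
images = torch.randn(2, 3, 224, 224)   # dummy batch standing in for endoscopy frames
mes_labels = torch.tensor([1, 3])      # hypothetical labels
uceis_labels = torch.tensor([2, 7])

mes_logits, uceis_logits = model(images)
# Simple sum of per-task losses; real training might weight the tasks.
loss = (nn.functional.cross_entropy(mes_logits, mes_labels)
        + nn.functional.cross_entropy(uceis_logits, uceis_labels))
loss.backward()
```

Treating both scores as plain multi-class targets is the simplest baseline; since both indices are ordinal, an ordinal-regression head is an equally plausible design choice.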
📝 Abstract
Ulcerative colitis (UC) is a chronic mucosal inflammatory condition that places patients at increased risk of colorectal cancer. Colonoscopic surveillance remains the gold standard for assessing disease activity, and reporting typically relies on standardised endoscopic scoring metrics. The most widely used is the Mayo Endoscopic Score (MES), with some centres also adopting the Ulcerative Colitis Endoscopic Index of Severity (UCEIS). Both are descriptive assessments of mucosal inflammation (MES: 0 to 3; UCEIS: 0 to 8), where higher values indicate more severe disease. However, computational methods for automatically predicting these scores remain limited, largely due to the lack of publicly available expert-annotated datasets and the absence of robust benchmarking. There is also a significant research gap in generating clinically meaningful descriptions of UC images, despite image captioning being a well-established computer vision task. Variability in endoscopic systems and procedural workflows across centres further highlights the need for multi-centre datasets to ensure algorithmic robustness and generalisability. In this work, we introduce a curated multi-centre, multi-resolution dataset that includes expert-validated MES and UCEIS labels alongside detailed clinical descriptions. To our knowledge, this is the first comprehensive dataset to combine dual scoring metrics for classification with expert-generated captions that describe mucosal appearance and provide clinically accepted reasoning for image captioning. This resource opens new opportunities for developing clinically meaningful multimodal algorithms. In addition to the dataset, we provide benchmarks using convolutional neural networks, vision transformers, hybrid models, and widely used multimodal vision-language captioning algorithms.
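For the captioning side of the benchmark, off-the-shelf vision-language models can be run zero-shot before any fine-tuning on the expert captions. The sketch below uses BLIP via HuggingFace Transformers as one example of a widely used captioning model; the abstract does not name the specific models benchmarked, and the image path is hypothetical. Zero-shot output will be generic scene description, so adapting to the dataset's clinical vocabulary would require fine-tuning on the expert-generated captions.

```python
from transformers import BlipProcessor, BlipForConditionalGeneration
from PIL import Image

# Publicly available captioning checkpoint (illustrative choice).
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("endoscopy_frame.png").convert("RGB")  # hypothetical file
inputs = processor(images=image, return_tensors="pt")

# Generate a short free-text description of the frame.
output_ids = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```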