CXR-LT 2026 Challenge: Projection-Aware Multi-Label and Zero-Shot Chest X-Ray Classification

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the dual challenge of multi-label classification for known pathologies and zero-shot classification for unseen pathologies in chest X-ray images by proposing a projection-aware unified framework. The framework integrates view-specific models to handle multi-label tasks and employs a dual-branch architecture that combines contrastive learning, asymmetric loss (ASL), and semantic prompts generated by large language models to enhance generalization to novel pathologies. Strong data augmentation and test-time augmentation strategies are further incorporated to mitigate the long-tailed class distribution and improve robustness. Experimental results demonstrate that the proposed method significantly outperforms existing approaches under both multi-label and zero-shot settings, achieving notable performance gains particularly in recognizing rare and previously unseen pathologies.
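The summary names Asymmetric Loss (ASL) as the mechanism for handling the long-tailed label distribution. As a rough illustration of how ASL down-weights easy negatives (the dominant case under long-tailed multi-label data), here is a minimal NumPy sketch of the standard ASL formulation with asymmetric focusing exponents and a probability margin for negatives; the function name and defaults are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def asymmetric_loss(logits, targets, gamma_pos=0.0, gamma_neg=4.0,
                    clip=0.05, eps=1e-8):
    """Minimal Asymmetric Loss (ASL) sketch for multi-label classification.

    Positives use focusing exponent gamma_pos; negatives use a larger
    gamma_neg plus a probability margin `clip`, so confident (easy)
    negatives contribute almost nothing to the loss. Hyperparameter
    values here are illustrative defaults, not the paper's settings.
    """
    p = sigmoid(logits)
    # Shifted probability for negatives: p_m = max(p - margin, 0)
    p_m = np.clip(p - clip, 0.0, 1.0)
    loss_pos = targets * np.power(1.0 - p, gamma_pos) * np.log(p + eps)
    loss_neg = (1.0 - targets) * np.power(p_m, gamma_neg) * np.log(1.0 - p_m + eps)
    return -np.mean(loss_pos + loss_neg)
```

With `gamma_neg=4` and `clip=0.05`, a negative label already predicted below ~0.05 probability incurs exactly zero loss, which is the property that keeps the huge pool of negative labels in a long-tailed CXR dataset from swamping the rare positives.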
📝 Abstract
This challenge tackles multi-label classification for known chest X-ray (CXR) lesions and zero-shot classification for unseen ones. To handle diverse CXR projections, we integrate projection-specific models via a classification network into a unified framework. For zero-shot classification (Task 2), we extend CheXzero with a novel dual-branch architecture that combines contrastive learning, Asymmetric Loss (ASL), and LLM-generated descriptive prompts. This effectively mitigates severe long-tail imbalances and maximizes zero-shot generalization. Additionally, strong data and test-time augmentations (TTA) ensure robustness across both tasks.
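The abstract builds Task 2 on CheXzero, which scores a pathology in zero-shot fashion by comparing an image embedding against text embeddings of a positive and a negative prompt (e.g. "pneumonia" vs. "no pneumonia") and taking a softmax over the two similarities. A minimal sketch of that scoring step, assuming precomputed embeddings (the embeddings, function names, and temperature value are illustrative, not from the paper):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_probability(img_emb, pos_emb, neg_emb, temperature=0.07):
    """CheXzero-style zero-shot score for one pathology.

    Softmax over the image embedding's similarity to a positive prompt
    embedding ("<pathology>") and a negative prompt embedding
    ("no <pathology>"). Embeddings are assumed to come from a
    contrastively trained image/text encoder pair.
    """
    s_pos = cosine(img_emb, pos_emb) / temperature
    s_neg = cosine(img_emb, neg_emb) / temperature
    m = max(s_pos, s_neg)  # subtract max for numerical stability
    e_pos, e_neg = np.exp(s_pos - m), np.exp(s_neg - m)
    return e_pos / (e_pos + e_neg)
```

The paper's LLM-generated descriptive prompts would slot in here as richer replacements for the bare pathology-name prompts, with scores averaged over a prompt ensemble.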
Problem

Research questions and friction points this paper is trying to address.

multi-label classification
zero-shot classification
chest X-ray
projection-aware
long-tail imbalance
Innovation

Methods, ideas, or system contributions that make the work stand out.

projection-aware
zero-shot classification
dual-branch architecture
contrastive learning
LLM-generated prompts
Juno Cho
KAIST, Daejeon, South Korea
Dohui Kim
GIST, Gwangju, South Korea
Mingeon Kim
KAIST, Daejeon, South Korea
Hyunseo Jang
Korea University, Seoul, South Korea
Chang Sun Lee
KAIST Graduate School of AI (GSAI), Seoul, South Korea
Jong Chul Ye
Professor, Chung Moon Soul Chair, Graduate School of AI, KAIST
machine learning · computational imaging · medical imaging · signal processing · compressed sensing