🤖 AI Summary
Chest X-ray (CXR) imaging suffers from anatomical superposition due to its inherent 2D projection nature, limiting representation learning and diagnostic fidelity. To address this, we propose X-WIN, a world model that distills 3D anatomical knowledge from chest CT into the CXR representation space by learning to predict a volume's 2D projections in latent space. X-WIN unifies real and simulated multi-view CXRs via affinity-guided contrastive alignment, masked image modeling on real CXRs, and a domain classifier that encourages statistically similar representations across the two domains. Evaluated on diverse downstream tasks, X-WIN outperforms existing foundation models under both linear probing and few-shot fine-tuning. Notably, it can render novel 2D projections that support reconstruction of a 3D CT volume, demonstrating strong generalization and geometric consistency. This establishes a principled bridge between 2D radiography and 3D anatomy, advancing self-supervised representation learning for thoracic imaging.
📝 Abstract
Chest X-ray radiography (CXR) is an essential medical imaging technique for disease diagnosis. However, as 2D projectional images, CXRs are limited by structural superposition and hence fail to capture 3D anatomy. This limitation makes representation learning and disease diagnosis challenging. To address it, we propose a novel CXR world model named X-WIN, which distills volumetric knowledge from chest computed tomography (CT) by learning to predict its 2D projections in latent space. The core idea is that a world model with internalized knowledge of 3D anatomical structure can predict CXRs under various transformations in 3D space. During projection prediction, we introduce an affinity-guided contrastive alignment loss that leverages mutual similarities to capture rich, correlated information across projections from the same volume. To improve model adaptability, we incorporate real CXRs into training through masked image modeling and employ a domain classifier to encourage statistically similar representations for real and simulated CXRs. Comprehensive experiments show that X-WIN outperforms existing foundation models on diverse downstream tasks under linear probing and few-shot fine-tuning. X-WIN also demonstrates the ability to render 2D projections for reconstructing a 3D CT volume.
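To make the affinity-guided contrastive alignment idea concrete, the sketch below shows one plausible reading of such a loss in NumPy: projections rendered from the same CT volume are treated as correlated positives, and each positive is weighted by its softmax-normalized similarity to the anchor ("affinity") rather than being treated as interchangeable. This is an illustrative reconstruction under stated assumptions, not the paper's actual implementation; the function name, the temperature `tau`, and the exact weighting scheme are our own choices.

```python
import numpy as np

def affinity_contrastive_loss(z, volume_ids, tau=0.1):
    """Hedged sketch of an affinity-guided contrastive alignment loss.

    z          : (N, D) array of projection embeddings.
    volume_ids : (N,) array; projections from the same CT volume share an
                 id and act as positives for one another.
    tau        : temperature (assumed hyperparameter, not from the paper).
    """
    # L2-normalize embeddings so similarities are cosine similarities.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)  # exclude self-pairs from the softmax

    # Row-wise log-softmax over all other samples (InfoNCE-style denominator).
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    # Positive mask: same volume, excluding self.
    same = volume_ids[:, None] == volume_ids[None, :]
    np.fill_diagonal(same, False)

    # Affinity weights: softmax over the similarities of positives only,
    # so more-similar (more correlated) projections contribute more.
    pos_sim = np.where(same, sim, -np.inf)
    w = np.exp(pos_sim - pos_sim.max(axis=1, keepdims=True))
    w = np.where(same, w, 0.0)
    w_sum = w.sum(axis=1, keepdims=True)
    w = np.divide(w, w_sum, out=np.zeros_like(w), where=w_sum > 0)

    # Affinity-weighted negative log-likelihood of the positives.
    per_anchor = -(w * np.where(same, logp, 0.0)).sum(axis=1)
    has_pos = same.any(axis=1)
    return per_anchor[has_pos].mean()
```

In the full training objective described above, a term like this would be combined with a masked-image-modeling loss on real CXRs and an adversarial domain-classification loss that pushes real and simulated CXR representations toward the same statistics; those components are omitted here for brevity.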