🤖 AI Summary
Handheld optical coherence tomography angiography (OCTA) is highly susceptible to motion artifacts, often resulting in entire B-scans missing from the volumetric data and compromising the integrity of retinal vascular structures. To address this challenge, this work proposes a deep learning-based restoration framework built upon a 2.5D U-Net architecture that takes neighboring B-scan stacks as input. A novel vessel-aware multi-axis orthogonal supervision loss is introduced to jointly optimize cross-sectional clarity and volumetric projection fidelity during reconstruction. By incorporating vessel-weighted intensity recovery and enforcing consistency between axial and en face projections, the method effectively preserves vascular continuity across native and orthogonal planes. Experimental results demonstrate that the proposed approach outperforms existing techniques in both perceptual quality and pixel-level accuracy, recovering capillary detail, vascular network continuity, and high-quality en face projections.
📝 Abstract
Handheld Optical Coherence Tomography Angiography (OCTA) enables noninvasive retinal imaging in uncooperative or pediatric subjects, but is highly susceptible to motion artifacts that severely degrade volumetric image quality. Sudden motion during 3D acquisition can leave retinal regions unsampled across entire B-scans (cross-sectional slices), resulting in blank bands in en face projections. We propose VAMOS-OCTA, a deep learning framework for inpainting motion-corrupted B-scans using vessel-aware multi-axis supervision. We employ a 2.5D U-Net architecture that takes a stack of neighboring B-scans as input to reconstruct a corrupted center B-scan, guided by a novel Vessel-Aware Multi-Axis Orthogonal Supervision (VAMOS) loss. This loss combines vessel-weighted intensity reconstruction with axial and lateral projection consistency, encouraging vascular continuity in native B-scans and across orthogonal planes. Unlike prior work that focuses primarily on restoring the en face maximum intensity projection (MIP), VAMOS-OCTA jointly enhances both cross-sectional B-scan sharpness and volumetric projection accuracy, even under severe motion corruption. We trained our model on both synthetic and real-world corrupted volumes and evaluated its performance using both perceptual quality and pixel-wise accuracy metrics. VAMOS-OCTA consistently outperforms prior methods, producing reconstructions with sharp capillaries, restored vessel continuity, and clean en face projections. These results demonstrate that multi-axis supervision offers a powerful constraint for restoring motion-degraded 3D OCTA data. Our source code is available at https://github.com/MedICL-VU/VAMOS-OCTA.
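To make the loss described above concrete, here is a minimal NumPy sketch of a vessel-weighted intensity term combined with multi-axis projection consistency. The function name, weighting scheme, and hyperparameters (`w_vessel`, `w_proj`) are illustrative assumptions, not the paper's exact formulation; the authors' implementation is available at the linked repository.

```python
import numpy as np

def vamos_loss(pred, target, vessel_mask, w_vessel=5.0, w_proj=1.0):
    """Illustrative sketch of a vessel-aware multi-axis loss.

    pred, target: (D, H, W) volumes -- the reconstructed and ground-truth
    B-scan stacks. vessel_mask: (D, H, W) binary vesselness map.
    All names and weights here are assumptions for illustration.
    """
    # Vessel-weighted intensity term: upweight voxels inside vessels so
    # capillary signal dominates the reconstruction error.
    weights = 1.0 + w_vessel * vessel_mask
    intensity = np.mean(weights * np.abs(pred - target))

    # Multi-axis projection consistency: compare maximum intensity
    # projections (MIPs) along two orthogonal axes, e.g. the axial
    # (en face) and lateral directions.
    proj = 0.0
    for axis in (0, 1):
        proj += np.mean(np.abs(pred.max(axis=axis) - target.max(axis=axis)))

    return intensity + w_proj * proj
```

The projection terms constrain structures that are only visible after collapsing the volume, which is why this kind of loss can restore en face vessel continuity that a purely per-B-scan intensity loss would miss.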