Don't Mind the Gaps: Implicit Neural Representations for Resolution-Agnostic Retinal OCT Analysis

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the anisotropic resolution of retinal OCT volumes, which stems from large inter-B-scan spacing, and the inconsistency and resolution dependence of conventional 2D methods when applied to 3D analysis. To overcome these limitations, the study introduces the first general-purpose implicit neural representation (INR) framework for retinal OCT analysis. By leveraging coordinate-driven continuous modeling that integrates en-face multimodal information, the method enables high-fidelity interpolation between B-scans and constructs a population-trained, generalizable neural atlas. This approach yields resolution-agnostic, high-quality 3D reconstruction, markedly improving the consistency of retinal layer segmentation and enabling robust cross-device and cross-protocol analysis under diverse imaging conditions.

📝 Abstract
Routine clinical imaging of the retina using optical coherence tomography (OCT) is performed with large slice spacing, resulting in highly anisotropic images and a sparsely scanned retina. Most learning-based methods circumvent the problems arising from the anisotropy by using 2D approaches rather than performing volumetric analyses. These approaches inherently bear the risk of generating inconsistent results for neighboring B-scans. For example, 2D retinal layer segmentations can have irregular surfaces in 3D. Furthermore, the typically used convolutional neural networks are bound to the resolution of the training data, which prevents their usage for images acquired with a different imaging protocol. Implicit neural representations (INRs) have recently emerged as a tool to store voxelized data as a continuous representation. Using coordinates as input, INRs are resolution-agnostic, which allows them to be applied to anisotropic data. In this paper, we propose two frameworks that make use of this characteristic of INRs for dense 3D analyses of retinal OCT volumes. 1) We perform inter-B-scan interpolation by incorporating additional information from en-face modalities that help retain relevant structures between B-scans. 2) We create a resolution-agnostic retinal atlas that enables general analysis without strict requirements for the data. Both methods leverage generalizable INRs, improving retinal shape representation through population-based training and allowing predictions for unseen cases. Our resolution-independent frameworks facilitate the analysis of OCT images with large B-scan distances, opening up possibilities for the volumetric evaluation of retinal structures and pathologies.
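The abstract's core mechanism, feeding coordinates into a model so the stored representation is continuous and can be queried at any resolution, can be illustrated with a toy sketch. The snippet below is not the paper's architecture: it uses random Fourier coordinate features with a linear least-squares fit (a simple stand-in for an INR's MLP) on a hypothetical 1D intensity profile sampled at sparse "B-scan" positions, then evaluates the continuous model on a 4x denser grid, mimicking inter-B-scan interpolation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical intensity profile along the slow (inter-B-scan) axis,
# sampled at only 8 sparse B-scan positions (large slice spacing).
coarse_z = np.linspace(0.0, 1.0, 8)
signal = np.sin(2 * np.pi * coarse_z)

# Coordinate encoding: random Fourier features, standing in for the
# positional encoding a real INR would apply to input coordinates.
B = rng.normal(scale=3.0, size=(1, 16))

def encode(z):
    proj = 2 * np.pi * z[:, None] @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

# "Training": a linear least-squares fit on encoded coordinates
# (a toy substitute for optimizing an MLP's weights).
w, *_ = np.linalg.lstsq(encode(coarse_z), signal, rcond=None)

# Resolution-agnostic query: because the model maps coordinates to
# intensities, it can be sampled between B-scans at 4x density.
fine_z = np.linspace(0.0, 1.0, 32)
pred = encode(fine_z) @ w
```

The key property mirrored here is that `pred` is obtained by evaluating a continuous function of the coordinate `z`, so the query grid is decoupled from the training grid; swapping `fine_z` for any other set of positions requires no retraining.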
Problem

Research questions and friction points this paper is trying to address.

optical coherence tomography
anisotropic imaging
resolution-agnostic analysis
retinal segmentation
volumetric analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Implicit Neural Representations
Resolution-Agnostic
Retinal OCT
3D Interpolation
Retinal Atlas
Bennet Kahrs
German Research Center for Artificial Intelligence, Luebeck, DE
Julia Andresen
Institute of Medical Informatics, University of Luebeck, Luebeck, DE
Fenja Falta
Institute of Medical Informatics, University of Luebeck, Luebeck, DE
M. Santarossa
Multimedia Information Processing Group, Kiel University, Kiel, DE
Heinz Handels
Professor of Medical Informatics, Director DFKI, University of Lübeck
Medical Image Computing, Artificial Intelligence, Deep Learning, Virtual Reality Simulations
Timo Kepp
German Research Center for Artificial Intelligence, Luebeck, DE