Two-Stage Decoupling Framework for Variable-Length Glaucoma Prognosis

📅 2025-09-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of variable-length follow-up sequences, scarce annotations, and poor generalization of end-to-end models under small-sample conditions in glaucoma prognosis, this paper proposes a two-stage decoupled framework. In the first stage, robust temporal representations are learned from heterogeneous, multi-source longitudinal data via self-supervised learning. In the second stage, an attention mechanism enables dynamic temporal aggregation over variable-length sequences. This is the first work to decouple self-supervised feature learning from variable-length time-series modeling, thereby overcoming the limitations of end-to-end models on small-scale medical datasets and enabling cross-dataset joint training and flexible clinical deployment. Evaluated on two heterogeneous real-world datasets—OHTS and GRAPE—the model achieves significant performance gains (AUC improvement of 3.2–5.8%) while maintaining high efficiency (<1.2M parameters), demonstrating strong generalizability and clinical applicability.

📝 Abstract
Glaucoma is one of the leading causes of irreversible blindness worldwide. Glaucoma prognosis is essential for identifying at-risk patients and enabling timely intervention to prevent blindness. Many existing approaches rely on historical sequential data but are constrained by fixed-length inputs, limiting their flexibility. Additionally, traditional glaucoma prognosis methods often employ end-to-end models, which struggle with the limited size of glaucoma datasets. To address these challenges, we propose a Two-Stage Decoupling Framework (TSDF) for variable-length glaucoma prognosis. In the first stage, we employ a feature representation module that leverages self-supervised learning to aggregate multiple glaucoma datasets for training, disregarding differences in their supervisory information. This approach enables datasets of varying sizes to learn better feature representations. In the second stage, we introduce a temporal aggregation module that incorporates an attention-based mechanism to process sequential inputs of varying lengths, ensuring flexible and efficient utilization of all available data. This design significantly enhances model performance while maintaining a compact parameter size. Extensive experiments on two benchmark glaucoma datasets: the Ocular Hypertension Treatment Study (OHTS) and the Glaucoma Real-world Appraisal Progression Ensemble (GRAPE), which differ significantly in scale and clinical settings, demonstrate the effectiveness and robustness of our approach.
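The second-stage temporal aggregation described in the abstract can be pictured as attention pooling over however many visits a patient has: each visit's feature vector is scored, the scores are softmax-normalized across the sequence, and the weighted sum yields one fixed-size representation regardless of sequence length. The sketch below is a minimal plain-Python illustration, not the paper's implementation; the `query` vector stands in for learned attention parameters.

```python
import math

def attention_pool(sequence, query):
    """Aggregate a variable-length sequence of per-visit feature vectors
    into one fixed-size vector via attention weights (a minimal sketch).

    sequence: list of feature vectors (lists of floats), any length >= 1
    query:    attention parameter vector; learned in practice, fixed here
    """
    # score each visit by its dot product with the query
    scores = [sum(q * x for q, x in zip(query, vec)) for vec in sequence]
    # softmax over however many visits this patient has
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # attention-weighted sum of the visit features
    dim = len(sequence[0])
    return [sum(w * vec[d] for w, vec in zip(weights, sequence))
            for d in range(dim)]
```

Because the softmax runs over whatever length the input happens to have, a patient with three follow-ups and one with ten are handled by the same module, which is the flexibility the fixed-length baselines lack.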
Problem

Research questions and friction points this paper is trying to address.

Predicting glaucoma progression with variable-length sequential data
Overcoming limited dataset size through self-supervised learning
Enabling flexible input lengths while maintaining model efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised learning aggregates multiple datasets
Attention mechanism processes variable-length sequences
Two-stage framework enhances performance compactly
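The decoupling in the bullets above can be sketched end to end: stage 1 fits an encoder on visits pooled across datasets without using labels, and stage 2 consumes sequences of any length through that frozen encoder. Everything below is a toy stand-in under assumed details — stage 1 is represented by unsupervised per-feature centering, and stage 2 by a length-agnostic mean pooling — where the paper uses self-supervised learning and learned attention, respectively.

```python
def fit_encoder_self_supervised(pooled_visits):
    """Stage 1 stand-in: learn per-feature centering statistics from
    visits pooled across datasets, ignoring their labels entirely.
    A placeholder for the paper's self-supervised feature module."""
    dim = len(pooled_visits[0])
    means = [sum(v[d] for v in pooled_visits) / len(pooled_visits)
             for d in range(dim)]
    return lambda visit: [x - m for x, m in zip(visit, means)]

def aggregate(encoded_seq):
    """Stage 2 stand-in: length-agnostic pooling of the frozen encoder's
    outputs (mean pooling here; the paper uses learned attention)."""
    dim = len(encoded_seq[0])
    n = len(encoded_seq)
    return [sum(e[d] for e in encoded_seq) / n for d in range(dim)]

# stage 1: pool unlabeled visits from two heterogeneous datasets
dataset_a = [[10.0, 1.0], [12.0, 3.0]]
dataset_b = [[14.0, 2.0]]
encode = fit_encoder_self_supervised(dataset_a + dataset_b)

# stage 2: a patient sequence of any length passes through the frozen
# encoder, then collapses to one fixed-size representation
patient = [[10.0, 1.0], [12.0, 3.0], [14.0, 2.0]]
rep = aggregate([encode(v) for v in patient])
```

The point of the structure is that stage 1 can grow its training pool with any compatible dataset, labeled or not, while stage 2 stays small and trains only on the prognosis labels — which is how the framework sidesteps the small-sample weakness of end-to-end models.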