Low-Rank-Modulated Functa: Exploring the Latent Space of Implicit Neural Representations for Interpretable Ultrasound Video Analysis

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of a structured and interpretable latent space in implicit neural representations (INRs) for ultrasound video analysis, which hinders effective modeling of temporal dynamics. To this end, the authors propose Low-Rank Modulated Functa (LRM-Functa), which introduces low-rank constraints on time-resolved modulation vectors within the Functa framework, yielding a latent space with clear periodic trajectories. The method enables direct readout of key cardiac cycle frames without additional training, achieving state-of-the-art unsupervised end-diastolic (ED) and end-systolic (ES) frame detection in echocardiography using only rank k=2, while preserving ejection fraction prediction accuracy. Furthermore, LRM-Functa demonstrates strong generalization on tasks such as B-line classification in lung ultrasound, significantly enhancing model interpretability and computational efficiency.
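The low-rank constraint described above can be pictured as factorizing the stack of per-frame modulation vectors into a few time-resolved coefficients and a shared basis. Below is a minimal sketch of that idea, assuming the modulations form a T x D matrix and using a plain SVD truncation; the paper's actual parameterization (learned within the Functa framework) may differ.

```python
import numpy as np

def low_rank_modulations(M: np.ndarray, k: int = 2):
    """Project per-frame modulation vectors M (T x D) to rank k.

    Returns the T x k time-resolved coefficients and the shared
    k x D modulation basis, so that coeffs @ basis approximates M.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    coeffs = U[:, :k] * s[:k]  # T x k per-frame latent coordinates
    basis = Vt[:k]             # k x D basis shared across frames
    return coeffs, basis

# Toy "cardiac" modulations: a periodic rank-2 trajectory.
T, D = 64, 128
t = np.linspace(0, 4 * np.pi, T)
rng = np.random.default_rng(0)
B_true = rng.standard_normal((2, D))
M = np.stack([np.sin(t), np.cos(t)], axis=1) @ B_true

coeffs, basis = low_rank_modulations(M, k=2)
err = np.linalg.norm(coeffs @ basis - M) / np.linalg.norm(M)
```

With truly periodic dynamics, rank k=2 already reconstructs the trajectory almost exactly, which is the intuition behind the paper's choice of such a small rank.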
📝 Abstract
Implicit neural representations (INRs) have emerged as a powerful framework for continuous image representation learning. In Functa-based approaches, each image is encoded as a latent modulation vector that conditions a shared INR, enabling strong reconstruction performance. However, the structure and interpretability of the corresponding latent spaces remain largely unexplored. In this work, we investigate the latent space of Functa-based models for ultrasound videos and propose Low-Rank-Modulated Functa (LRM-Functa), a novel architecture that enforces a low-rank adaptation of modulation vectors in the time-resolved latent space. When applied to cardiac ultrasound, the resulting latent space exhibits clearly structured periodic trajectories, facilitating visualization and interpretability of temporal patterns. The latent space can be traversed to sample novel frames, revealing smooth transitions along the cardiac cycle, and enabling direct readout of end-diastolic (ED) and end-systolic (ES) frames without additional model training. We show that LRM-Functa outperforms prior methods in unsupervised ED and ES frame detection, while compressing each video frame to as low as rank k=2 without sacrificing competitive downstream performance on ejection fraction prediction. Evaluations on out-of-distribution frame selection in a cardiac point-of-care dataset, as well as on lung ultrasound for B-line classification, demonstrate the generalizability of our approach. Overall, LRM-Functa provides a compact, interpretable, and generalizable framework for ultrasound video analysis. The code is available at https://github.com/JuliaWolleb/LRM_Functa.
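The abstract's claim of direct ED/ES readout without extra training can be illustrated on a rank-2 latent trajectory: if the cardiac cycle traces a periodic loop, candidate key frames sit at the extrema of the dominant latent coordinate. The readout rule below is a hypothetical sketch for illustration; the paper's exact criterion may differ.

```python
import numpy as np

def detect_ed_es(coeffs: np.ndarray):
    """Pick candidate ED/ES frames as the extrema of the dominant
    rank-2 latent coordinate (hypothetical readout rule)."""
    c = coeffs[:, 0]
    ed = int(np.argmax(c))  # largest excursion -> end-diastole candidate
    es = int(np.argmin(c))  # smallest excursion -> end-systole candidate
    return ed, es

# Toy trajectory: one cardiac cycle traced over 32 frames.
T = 32
t = np.linspace(0, 2 * np.pi, T, endpoint=False)
coeffs = np.stack([np.cos(t), np.sin(t)], axis=1)

ed, es = detect_ed_es(coeffs)
```

On this synthetic loop the extrema of the first coordinate fall half a cycle apart, mirroring how ED and ES frames bracket the cardiac contraction.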
Problem

Research questions and friction points this paper is trying to address.

Implicit Neural Representations
Latent Space Interpretability
Ultrasound Video Analysis
Temporal Patterns
Low-Rank Modulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-Rank Modulation
Implicit Neural Representations
Interpretable Latent Space
Ultrasound Video Analysis
Functa