AI Summary
To address the high cost and poor generalizability of pixel-level cell annotations in dense microscopic images, this paper proposes a prototype-driven, weakly supervised annotation framework. Methodologically, it presents the first integration of diffeomorphism-invariant feature learning with invertible deformation-field modeling, realized via a dual-network architecture: one network learns deformation-robust feature representations, while the other estimates a differentiable, invertible registration field, enabling precise annotation transfer from a few prototypes to target images. The framework supports arbitrary pixel-level annotation types (e.g., segmentation masks, boundaries, center points), offering both theoretical interpretability and cross-task generalization. Evaluated on three distinct microscopic imaging tasks, it significantly outperforms existing supervised, semi-supervised, and unsupervised methods while substantially reducing manual annotation effort. The implementation is publicly available.
Abstract
The proliferation of digital microscopy images, driven by advances in automated whole slide scanning, presents significant opportunities for biomedical research and clinical diagnostics. However, accurately annotating densely packed information in these images remains a major challenge. To address this, we introduce DiffKillR, a novel framework that reframes cell annotation as the combination of archetype matching and image registration tasks. DiffKillR employs two complementary neural networks: one that learns a diffeomorphism-invariant feature space for robust cell matching and another that computes the precise warping field between cells for annotation mapping. Using a small set of annotated archetypes, DiffKillR efficiently propagates annotations across large microscopy images, reducing the need for extensive manual labeling. More importantly, it is suitable for any type of pixel-level annotation. We will discuss the theoretical properties of DiffKillR and validate it on three microscopy tasks, demonstrating its advantages over existing supervised, semi-supervised, and unsupervised methods. The code is available at https://github.com/KrishnaswamyLab/DiffKillR.
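The archetype-matching-plus-registration pipeline can be sketched in miniature. The snippet below is a toy illustration, not the DiffKillR implementation: raw pixel intensities stand in for the learned diffeomorphism-invariant features, a brute-force integer translation stands in for the learned invertible warping field, and all function names are hypothetical.

```python
import numpy as np

def match_archetype(target_feat, archetype_feats):
    # Nearest archetype by cosine similarity in feature space
    # (the paper uses a learned deformation-robust embedding instead).
    sims = [
        float(np.dot(target_feat, f)
              / (np.linalg.norm(target_feat) * np.linalg.norm(f) + 1e-8))
        for f in archetype_feats
    ]
    return int(np.argmax(sims))

def estimate_shift(archetype, target):
    # Brute-force integer translation maximizing overlap with the target,
    # a crude stand-in for the learned dense, invertible warping field.
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            shifted = np.roll(np.roll(archetype, dy, axis=0), dx, axis=1)
            score = float((shifted * target).sum())
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

def propagate_annotation(mask, shift):
    # Apply the estimated warp to the archetype's annotation; because a
    # translation is trivially invertible, any pixel-level annotation
    # (mask, boundary, center point) can be transferred the same way.
    dy, dx = shift
    return np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
```

In the actual framework both stand-ins are neural networks: matching happens in a feature space trained to be invariant to diffeomorphic deformations, and the warp is a dense invertible field rather than an integer shift, but the division of labor (match first, then register and transfer the annotation) is the same.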