DistillFSS: Synthesizing Few-Shot Knowledge into a Lightweight Segmentation Model

📅 2025-12-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
CD-FSS faces three core challenges: cross-domain distribution shift, non-overlapping label spaces, and scarce support samples—leading to inefficient inference and poor generalization in existing methods. To address these, we propose a knowledge distillation–based parametric few-shot semantic segmentation framework. Leveraging a teacher-student architecture, it explicitly internalizes support-set knowledge into designated layers of the student network, enabling lightweight, support-image-free inference at test time. We further introduce parametric knowledge synthesis and hierarchical distillation to jointly enhance few-shot robustness and multi-shot differentiability. Evaluated on new benchmarks in medical imaging, industrial inspection, and remote sensing, our method significantly reduces computational overhead (average 62% reduction) across diverse few-shot settings while achieving state-of-the-art accuracy—particularly excelling in cross-domain generalization and novel-class adaptation.

Technology Category

Application Category

📝 Abstract
Cross-Domain Few-Shot Semantic Segmentation (CD-FSS) seeks to segment unknown classes in unseen domains using only a few annotated examples. This setting is inherently challenging: source and target domains exhibit substantial distribution shifts, label spaces are disjoint, and support images are scarce--making standard episodic methods unreliable and computationally demanding at test time. To address these constraints, we propose DistillFSS, a framework that embeds support-set knowledge directly into a model's parameters through a teacher--student distillation process. By internalizing few-shot reasoning into a dedicated layer within the student network, DistillFSS eliminates the need for support images at test time, enabling fast, lightweight inference, while allowing efficient extension to novel classes in unseen domains through rapid teacher-driven specialization. Combined with fine-tuning, the approach scales efficiently to large support sets and significantly reduces computational overhead. To evaluate the framework under realistic conditions, we introduce a new CD-FSS benchmark spanning medical imaging, industrial inspection, and remote sensing, with disjoint label spaces and variable support sizes. Experiments show that DistillFSS matches or surpasses state-of-the-art baselines, particularly in multi-class and multi-shot scenarios, while offering substantial efficiency gains. The code is available at https://github.com/pasqualedem/DistillFSS.
Problem

Research questions and friction points this paper is trying to address.

Segment unknown classes in unseen domains with few examples
Address distribution shifts and disjoint label spaces in segmentation
Reduce computational demands and support reliance at test time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embedding support-set knowledge via teacher-student distillation
Eliminating support images at test time for lightweight inference
Scaling efficiently to large support sets with fine-tuning
🔎 Similar Papers
No similar papers found.