Weakly Supervised Object Segmentation by Background Conditional Divergence

📅 2025-06-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In specialized imaging domains—such as synthetic aperture sonar (SAS), remote sensing, and biomedical imaging—pixel-level annotations are scarce and prohibitively expensive to acquire. To address this, we propose a binary object segmentation method that operates solely with image-level weak labels (i.e., presence/absence of the target object). Our approach introduces a novel background-conditioned divergence estimation framework to synthesize counterfactual images without relying on pre-trained or generative models. By integrating clustering-guided background substitution and contrastive learning, it effectively disentangles foreground and background representations. The method jointly optimizes sample-wise divergence estimation, background clustering, and weakly supervised training, yielding substantial improvements in segmentation accuracy. Evaluated on side-scan and synthetic aperture sonar benchmarks, it significantly outperforms existing unsupervised methods; its generalizability is further validated on natural image datasets. Notably, the framework requires neither GANs nor large-scale pre-trained models.

📝 Abstract
As a computer vision task, automatic object segmentation remains challenging in specialized image domains without massive labeled data, such as synthetic aperture sonar images, remote sensing, biomedical imaging, etc. In any domain, obtaining pixel-wise segmentation masks is expensive. In this work, we propose a method for training a masking network to perform binary object segmentation using weak supervision in the form of image-wise presence or absence of an object of interest, which provides less information but may be obtained more quickly from manual or automatic labeling. A key step in our method is that the segmented objects can be placed into background-only images to create realistic images of the objects with counterfactual backgrounds. To create a contrast between the original and counterfactual background images, we propose to first cluster the background-only images and then, during learning, create counterfactual images that blend objects segmented from their original source backgrounds into backgrounds chosen from a targeted cluster. One term in the training loss is the divergence between these counterfactual images and the real object images with backgrounds of the target cluster. The other term is a supervised loss for background-only images. While an adversarial critic could provide the divergence, we use sample-based divergences. We conduct experiments on side-scan and synthetic aperture sonar images in which our approach succeeds compared to previous unsupervised segmentation baselines that were only tested on natural images. Furthermore, to show generality, we extend our experiments to natural images, obtaining reasonable performance with a method that avoids pretrained networks, generative networks, and adversarial critics. The code for this work can be found at https://github.com/bakerhassan/WSOS.
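The counterfactual construction described in the abstract can be sketched as an alpha-blend: a soft mask carries the object from its source image onto a background-only image drawn from the target cluster. The function name and array layout below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def composite_counterfactual(obj_img, mask, bg_img):
    """Blend the object selected by `mask` from `obj_img` onto a
    background-only image `bg_img` from a target background cluster.
    `obj_img` and `bg_img` have shape (H, W, C); `mask` is (H, W)
    or (H, W, 1) with values in [0, 1] (e.g., a network's soft output)."""
    if mask.ndim == obj_img.ndim - 1:
        mask = mask[..., None]  # broadcast over channels
    return mask * obj_img + (1.0 - mask) * bg_img
```

With a perfect mask the composite keeps the object pixels and replaces everything else, which is what lets the divergence term compare counterfactuals against real object images over the same background cluster.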
Problem

Research questions and friction points this paper is trying to address.

Weakly supervised object segmentation without massive labeled data
Reducing cost of pixel-wise segmentation masks using weak supervision
Improving segmentation accuracy with counterfactual background images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses weak supervision for binary object segmentation
Creates counterfactual images with clustered backgrounds
Employs sample-based divergences for training loss
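The paper replaces an adversarial critic with sample-based divergences between batches of counterfactual and real images. One standard sample-based divergence is the (biased) squared maximum mean discrepancy with a Gaussian kernel, sketched below purely as an illustration; the paper's exact estimator and kernel choice may differ.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Pairwise Gaussian kernel values between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased squared MMD estimate between samples X (n, d) and Y (m, d).
    Zero when the two samples coincide; grows as the distributions separate."""
    return (gaussian_kernel(X, X, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean()
            - 2.0 * gaussian_kernel(X, Y, sigma).mean())
```

In this role the divergence is computed between flattened (or feature-mapped) counterfactual and real image batches, giving a training signal without training a critic network.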
Hassan Baker
Department of Electrical and Computer Engineering, University of Delaware
Matthew S. Emigh
Naval Surface Warfare Center Panama City Division
Austin J. Brockmeier
University of Delaware
data science · machine learning · machine learning for neuroscience · information theoretic learning